How can we use AI responsibly and ethically at Forward Action?

AI seems to be in every conversation, conference and blog post these days – and for good reason. It’s hard to overestimate the potential impact of this emerging technology on all our lives. We have something to add into the mix – and that’s how we’re working to use AI in a responsible and ethical way at Forward Action.

As an agency that works with partners to drive progressive change, it’s our responsibility to consider how we best harness the power of this ever-developing technology, while making sure we stick to our values and use it for the greater good.

We’re already busy using AI in three key ways:

  • To make efficiencies in our daily work – for example, you might notice an AI note-taker in our calls these days
  • To improve our services for partners by incorporating AI into our work together – from speedily generating a range of innovative ideas to interpreting results data, always with our partners’ permission
  • To build our products and tools by digging into the opportunities provided by AI to improve user experience and create effective and exciting new campaign tactics.

As keen as we are to engage with this technology, we’re even keener to make sure we’re entering our AI journey with our eyes wide open to the ethical implications – and making every effort to mitigate the risks wherever we can.

Here are some key commitments we’ve made in our use of AI. We’re sharing them publicly so our partners can trust our approach, so other organisations can borrow our ideas, and so anyone reading this can hold us accountable too:

1: Always be aware of AI’s bias

It’s well documented that AI is not an objective source of information. Racial bias and other forms of discrimination are embedded in the technology because of the indiscriminate scraping of the internet used to train it. By drawing uncritically on the whole of the internet, AI reproduces the discrimination, privilege and bias that exist there.

At Forward Action, we’ll always have this front of mind when using AI, and will make sure we’re interrogating any results. While we know we won’t be able to eradicate bias entirely, just as we can’t completely eradicate our own, we will work hard to make sure we don’t reproduce the bias that comes through in AI’s outputs.

2: Never use AI wholesale, and never without our partners’ permission

While we think that AI has a lot of potential for opening up the creative process and driving innovation in digital campaigning, it cannot do this on its own. Our highly skilled team will always be translating results, questioning outcomes, and building on AI’s output before sharing our work with partners and the public – be it a report, an email, a piece of code, an image or anything else.

We know that many partners are having their own conversations about the ethical issues with AI, and not everyone is comfortable yet with us using it. If we are planning on engaging AI to produce any of our work with you, we’ll always ask first and ensure you’re fully informed of the role we see it playing in our partnership, and why. If you’re not comfortable with us using it, we won’t.

3: Use paid versions of the tools and build our own GPTs

To make sure we’re never using partner data to train AI models, we’ve invested in paid versions of the tools, with closed team spaces to work in.

We’re already working on building our own GPTs to create useful tools for partners and ourselves, while protecting all of our data. Any partner whose data forms part of building these models will be asked for consent.

4: Use AI generated images carefully and clearly

AI models are trained on copyrighted text and images, and we’re not able to credit the individual contributors to any output. So we will always tag images as ‘AI generated’, and we’ll make sure we’re never presenting AI content as our own or suggesting a realistic-looking image is genuine.

Any content created will have our skilled team’s oversight, and we’ll also look to balance our output by working with designers, partner image banks and stock imagery, where attribution is much clearer. 

And finally – it should go without saying, but we will never use AI to generate or enhance images of our partners’ service users, supporters or beneficiaries.

5: Experiment, share learnings and empower the sector

To win as a sector, we all need to embrace the power of AI. It has the potential to give us the edge we need to have impact and influence, but if we fail to move quickly enough, we’ll miss a huge opportunity for those we represent and the change we all want to see.

We have lots to learn, and the technology is developing every single day. So we’ve made AI a focus, to make sure we stay on top of its potential, how best to harness it, and how we can help the wider sector make better use of it too. As part of this, we need to be open to experimenting with AI and pushing its boundaries, exploring the new and unexpected ways it can help us all achieve our goals.

We will work with partners on specific projects and initiatives, but we’ll also make sure we’re sharing our learnings and contributing to the wider sector by hosting regular webinars, posting blogs and being available to answer any of your questions! We know there are lots of organisations and agencies already doing this, so we’ll look to contribute in a targeted way that adds unique value and ultimately helps the whole sector build confidence in using AI in order to strengthen our collective power.

Chances are that by the time we’ve posted this blog, AI capabilities will have evolved and our commitments to using it ethically will need to keep up. One of the next big questions we’re looking into is the environmental impact of AI and how other organisations are working to mitigate this. 

We’re constantly looking to improve our approach, collaborate with others as they start to experiment, and find ways to do things better.

We’ll update you with any changes we make, so keep an eye out for our future AI blogs and webinars as we look to share what we’ve learnt along the way.