Submission to the Select Committee on Adopting Artificial Intelligence (AI)

Recommendations

The community services sector is a key space for AI development and progress. The use of AI could improve service delivery, client experience and workforce wellbeing, provided that ethical and responsible adoption mitigates the associated risks.

It is critical that the risks and significant harms that can arise from AI adoption are identified and addressed before adoption proceeds.

Appropriate and equitable investment (funding and resource allocation) will be required from Government to enable capacity building and technological development within the community services sector, including co-design with service users.

Summary

The community services (or not-for-profit) sector provides critical frontline services to diverse individuals, families and communities experiencing complex and significant life challenges and in need of social assistance and support. The types of services that the sector delivers include, but are not limited to, family and domestic violence, housing and homelessness, disability, counselling and support, emergency relief, and alcohol and other drug services. Some community services are already using AI to develop new and effective programs for their organisations and their clients, including Justice Connect and a First Nations-led initiative in Kakadu, discussed further in this submission. AI offers innovation and efficiencies that can significantly benefit the community services sector and the people it supports. It is critical, however, that community services are sufficiently equipped and empowered to engage with and use AI while effectively mitigating the associated risks. Our submission, developed with insights from the sector including consultations with RISE Network, Wanslea and Anglicare WA, focuses on the key risks that need mitigation as well as specific opportunities where AI could be useful in the community services sector. WACOSS also acknowledges and supports the recommendations expressed in ACOSS’ submission to the committee.

Response to Terms of Reference b.: risks and harms arising from the adoption of AI technologies, including bias, discrimination and error;

As identified in the terms of reference, the replication of bias and discrimination is a major risk of adopting AI technologies and could cause significant harms, particularly in the community services sector. AI relies on the information it is given as input, which is susceptible to the biases of human society, as well as to inaccuracies and misinformation. That information may also be incomplete, particularly for data concerning marginalised groups. There are several demonstrated instances of AI replicating real-world discrimination, some of which are described in the Australian Government’s 2023 discussion paper, ‘Safe and Responsible AI in Australia’.[1] These documented harms include AI recruitment programs replicating sexism by preferencing male candidates over female candidates, and racial discrimination in AI programs assessing recidivism risk.

These risks can and must be mitigated. Data input to AI programs should be accurate, comprehensive and rich, as well as clearly defined and critically assessed for bias. The safety of an AI system also depends on its training and on ongoing testing and development.[2] This process takes skill, time and resources. It also matters who is developing the program and who is determining the training data. If the program is intended to benefit marginalised communities, their perspectives should guide this process so that it best suits their needs. Justice Connect’s AI model, discussed further under ToR d., demonstrates the benefits of comprehensive datasets and of ongoing review and adaptation: further training the model with language samples from more diverse and marginalised cohorts improved its accuracy and effectiveness in identifying the appropriate and relevant types of legal support that people needed based on their everyday language descriptions. Notably, this example also illustrates the resourcing and investment required to adopt AI, including identifying and engaging trusted partners for collaboration, such as tech experts, to realise these types of initiatives.
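To make ‘critically assessed for bias’ concrete, one simple technique is to report a model’s accuracy separately for each cohort it serves, rather than as a single overall figure, so that disparities are visible before deployment. The sketch below is purely illustrative; the cohort names, labels and predictions are invented and do not come from any service mentioned in this submission.

```python
# A minimal, hypothetical sketch of disaggregated evaluation: measuring
# accuracy per cohort so that bias is visible, not averaged away.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return accuracy for each group separately."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    return {group: correct[group] / total[group] for group in total}

# Invented evaluation records: true outcome, model prediction, cohort.
y_true = ["approve", "refer", "approve", "refer", "approve", "refer"]
y_pred = ["approve", "refer", "approve", "approve", "refer", "approve"]
groups = ["cohort_a", "cohort_a", "cohort_a",
          "cohort_b", "cohort_b", "cohort_b"]

print(accuracy_by_group(y_true, y_pred, groups))
# -> {'cohort_a': 1.0, 'cohort_b': 0.0}
# A gap like this signals the model needs retraining with more
# representative data before it could responsibly be deployed.
```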

There are also privacy and security concerns with AI use, particularly in relation to the personal data that the community services sector holds and uses due to the nature of its services. AI systems also rely on extensive datasets, which a single service may not possess, creating another barrier to adoption. These risks can realistically be mitigated through strict parameters and guidelines around datasets for AI, both in building AI models and in how AI processes and stores personal data, as well as through broader data management and security frameworks and principles. Collaboration can also help mitigate this risk, through engaging trusted tech experts as well as sharing lessons learned and insights with and across sectors to continuously improve and enhance these frameworks and harness AI for service improvement and community outcomes.
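As one hedged illustration of what a ‘strict parameter’ around personal data might look like in practice, a service could redact obvious identifiers from case notes before any text is passed to an external AI tool. The sketch below is not a complete de-identification solution (names, dates of birth and many other identifiers would need handling, alongside governance and consent); the patterns are illustrative assumptions only.

```python
# A minimal, illustrative sketch of redacting obvious identifiers from
# free text before it reaches an external AI service. Real
# de-identification needs far more than these three patterns.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b(?:\+?61|0)[2-478](?:[ -]?\d){8}\b"), "[PHONE]"),
    (re.compile(r"\b\d{1,5}\s+\w+\s+(?:St|Rd|Ave|Cres)\b"), "[ADDRESS]"),
]

def redact(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

# Invented example case note.
note = "Client can be reached on 0412 345 678 or [email protected], lives at 12 Example St."
print(redact(note))
# -> "Client can be reached on [PHONE] or [EMAIL], lives at [ADDRESS]."
```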

Another risk that AI may pose is to First Nations data sovereignty, which is a priority reform area of the Closing the Gap Agreement. AI systems that draw indiscriminately from internet sources may use First Nations-owned data without consent, knowledge or recognition. Data sovereignty is a focus of the Closing the Gap Agreement because data has long been collected and interpreted about First Nations peoples without First Nations control or input, leading to deficit-focussed narratives, inaccuracies and the withholding of self-determination. AI policies, frameworks and programs must be developed with First Nations peoples to prioritise and ensure self-determination and reflect the commitment to First Nations data sovereignty.

The abovementioned risks demonstrate the need for ethical and responsible AI use. The community services sector is well placed to develop and adopt ethical AI use and practice, as the sector prioritises human and social justice-centred interventions and design, privacy, accessibility, and inclusion.[3] Risks around unethical use can be mitigated through ethical guidelines and frameworks, including transparency and accountability. Developing and implementing appropriate regulatory mechanisms is also an important undertaking to ensure accountability. In particular, government has a role to play in holding tech companies and AI platform developers to account in prioritising ethical use and minimising the potential for harm.

As outlined earlier, adopting AI requires adequate capacity building and resourcing. Furthermore, services across the sector vary in their digital capability and available resources, both human and technological. Some are already behind, using outdated and inefficient platforms, and have a significantly larger gap to fill before they can start considering how to get ahead of emerging technologies such as AI. Because of this, it is vital that community services are equitably and appropriately resourced and informed so that they can build safe and effective AI systems to assist their support of community members, who are often in entrenched disadvantage.

Another risk, not unique to AI, lies in change management processes and fear of emerging technologies and systems, including AI. This applies to organisations, staff, clients and funders. This fear may limit innovation and progress. It may also be a barrier to clients accessing necessary services. Starting with AI literacy and promoting understanding of AI to empower informed choice is needed to mitigate this risk. At the heart of this is the understanding that AI is a tool: it draws on a dataset, acts as a processing engine, and needs to be taught, that is, it requires effective human input to produce effective or useful outputs.

While it is critical to carefully assess and appropriately mitigate the risks and harms from adopting AI technologies, there is also a risk for the sector in NOT adopting these technologies. The community services sector collectively holds vast amounts of personal and powerful data (including learnings and outcomes from programs and services), faces increasing demand for services amid a cost-of-living crisis, and remains among the most vulnerable to cybersecurity threats. It also faces increasing challenges attracting and retaining its workforce, as it remains underfunded and competes with the corporate and public sectors, which offer more in terms of wages and resources.

The sector also has a lot of ‘catching up’ to do when it comes to digital capability, with few organisations having a digital strategy in place or agreeing that their systems enable understanding of service impact, and even fewer having a clear plan to improve cybersecurity.[4] This means that if the sector cannot build capacity and digital capability, including adopting AI technologies, it risks falling further behind. In turn, the communities and individuals that the community services sector serves risk further marginalisation, particularly in relation to digital exclusion.

Response to Terms of Reference d.: opportunities to adopt AI in ways that benefit citizens, the environment and/or economic growth, for example in health and climate management;

Responsible and ethical AI use presents exciting opportunities for the community services sector that can improve outcomes for the communities and individuals the sector serves, including people experiencing entrenched disadvantage. It can also benefit the workforce, organisational capabilities and resourcing. There are numerous examples where digital transformation and harnessing new technologies have built workforce capacity; enabled automation, better data management and impact measurement; and increased efficiencies for overloaded frontline workers across the sector, as illustrated in the Infoxchange 2023 annual report.[4]

AI technology, harnessed ethically and responsibly, presents an opportunity for community services to increase efficiencies. The community services sector is often under-resourced and over-capacity. It delivers critical community and person-centred services where human connection is essential. AI is incapable of replacing this human connection and person-centred approach, and is therefore unlikely to replace human roles in service delivery. Instead, AI would enable workers, particularly frontline service workers, to invest less time in repetitive and time-consuming administrative or ‘low-value’ tasks that have little benefit for the worker or the service user, and more time doing the work that ‘matters’ and benefits from human connection. This includes direct service delivery that is ‘high-value’ and has beneficial impact for the people and communities who need it most. Further, AI presents an opportunity to ‘deepen’ relationships between service users and service providers by collecting and synthesising valuable information that can enable service providers to understand a person’s situation and needs. It can also provide interim resources and supports for service users during a ‘waiting period’, monitoring their changing situation so that they can be referred to other appropriate services or prioritised should they experience a crisis or significant need during this period. Because AI is interactive, it presents an opportunity to create digital spaces and services that are more accessible, a crucial undertaking in a world where services, including essential government services, are increasingly digital and many people experience digital exclusion.

AI adoption within the community services sector can also enhance impact measurement, monitoring and evaluation capabilities through the automation of data entry and processing, reducing the chance of human error. AI-driven automation can support capacity building in data analysis and in deriving valuable insights from organisational and external data. Improved efficiencies and monitoring and evaluation capabilities through AI in turn have beneficial implications and outcomes for communities, for the workforce, for the sector, and for the economy.

There are some compelling examples from the sector that demonstrate this:

Justice Connect’s AI model[5]:

  • Justice Connect is a not-for-profit organisation that designs social justice-focused, high-impact interventions to increase access to legal support.
  • Problem: To connect people with appropriate legal support, Justice Connect needs to identify the type of legal support needed, which people seeking help online often struggle to accurately articulate.
  • AI-centred solution: Using thousands of natural language samples from their online intake system, annotated by pro bono lawyers, Justice Connect developed a natural language processing AI in collaboration with the University of Melbourne. This AI tool understands and can categorise people’s legal issues based on their everyday language descriptions (a simplified sketch of this kind of text classification follows this list).
  • Impact: This AI model breaks down the barriers that people seeking legal help face in accurately linking into the types of legal support or services they need. Crucially, Justice Connect found that when they used additional language samples from diverse and more marginalised cohorts that face additional barriers to accessing services, the AI model’s accuracy improved for all users. This reaffirms the importance of combatting bias through ethical and responsible development and use of AI tools, and the potential beneficial impacts this can have.
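For readers unfamiliar with how such a tool works, the sketch below shows the general shape of the technique: training a text classifier on annotated everyday-language descriptions, then using it to categorise new requests. It is a toy illustration only, not Justice Connect’s actual model; the categories, sample texts and library choices are assumptions made for this example.

```python
# A toy sketch of everyday-language classification, loosely analogous to
# the approach described above. NOT Justice Connect's actual model; the
# categories and texts are invented, and a real system would train on
# thousands of lawyer-annotated samples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented annotated intake samples: a plain-language description paired
# with the legal issue category a lawyer assigned to it.
samples = [
    "my landlord is evicting me and I have nowhere to go",
    "the real estate agent kept my bond after I moved out",
    "my boss has not paid me for the last three weeks",
    "I was fired without any warning or reason",
    "a debt collector keeps calling about an old credit card",
    "I cannot afford to repay my payday loan",
]
labels = ["tenancy", "tenancy", "employment", "employment", "debt", "debt"]

# TF-IDF features plus a linear classifier: a common baseline for mapping
# free text onto a fixed set of categories.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(samples, labels)

# Categorise a new help-seeker's description.
print(model.predict(["I am behind on rent and the agent wants me out"]))
# Likely ['tenancy'] on this toy data; with so few samples the output
# is illustrative only.
```

The point of the example is the workflow rather than the specific libraries: the quality and diversity of the annotated samples determine how well the classifier serves different cohorts, which is why Justice Connect’s investment in more diverse training data improved accuracy for all users.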

First Nations-led project – Magpie Geese[6]:

  • For First Nations led community services, AI offers an opportunity to utilise cultural and community knowledge and promote data sovereignty and self-determination. An example of ethical use of AI in this context is the partnership between Kakadu Traditional Owners, Kakadu National Park Rangers and scientists.
  • Problem: Para grass was overtaking the wetlands and causing Magpie Geese to leave the area, despite ongoing work by Rangers and Traditional Owners to control the para grass.
  • Solution: Collecting information about the land and Magpie Geese, including knowledge held by Traditional Owners that was not otherwise written down, and utilising AI to monitor progress.
  • Impact: This project was Indigenous-led from the beginning and utilised local expertise. It has ultimately resulted in a positive impact on Magpie Geese numbers.

It is inevitable that the sector needs to adapt to emerging technologies, including AI, both to seize the opportunities and to address the risks and harms these technologies pose to the people we serve and to the sector. However, as discussed throughout this submission, adopting AI technologies ethically and responsibly to enhance efficiencies and, ultimately, service delivery and outcomes for service users is time and resource intensive. This will require investment from both a cultural and a resource perspective, including a commitment to change management and digital transformation. Community service organisations are under-resourced and over-capacity as a baseline, and will also face barriers to adopting AI, including reluctance to change, lower levels of digital skills and baseline data management, and a lack of cybersecurity mechanisms to enable adoption of new technologies like AI.

Conversely, the community services sector has a significant opportunity to benefit from adopting AI to support and automate low-value, time-consuming tasks, freeing up the workforce to focus on high-value work that achieves outcomes for the people the sector serves. The sector embodies values that create an ideal environment for ethical, responsible, community-centred and outcome-driven AI adoption, but it will need support and equitable investment to fund and resource capacity building to adopt AI technologies within the sector.

 

For more information about this Submission, please contact Rachel Siewert, Deputy CEO, [email protected], 08 6381 5300

 

References

[1] https://apo.org.au/sites/default/files/resource-files/apo-nid322938.pdf

[2] https://apo.org.au/sites/default/files/resource-files/apo-nid322938.pdf

[3] https://www.communitydirectors.com.au/articles/sector-leaders-say-nfps-can-be-the-ethical-backbone-of-artificial-intelligence

[4] https://www.infoxchange.org/sites/default/files/infoxchange_2023_annualreport_screen.pdf

[5] https://justiceconnect.org.au/about/innovation/legal-help-experience/ai-project/

[6] https://www.csiro.au/en/news/all/articles/2019/november/magpie-geese-return-ethical-ai-indigenous-knowledge
