Artificial intelligence and trust in planning

In late 2022, the release of the artificial intelligence (AI) interface ChatGPT sent shockwaves through society as people, for the first time, meaningfully considered what a post-AI society might look like. Questions were raised about what these technologies would mean for a wide range of professions, and planning was no exception.

There is little debate that tools such as ChatGPT, known as generative AI, have the potential to change the way in which planners work. Rather than simply identifying or classifying text and images, generative AI can create them. Herein lies the breakthrough. To understand what such a technology may mean for trust in the planning system, we must first analyse what kinds of applications generative AI may have in planning.

What could AI do for planning?

At a high level, AI could offer a consistent and reliable assessment function for development, actionable insights into lifestyle and behavioural patterns for strategic planning, and flexible community engagement options, all of which may improve trust in planning. But it seemed only appropriate to put this question to the technology itself. When prompted about what AI might mean for the future of the urban planning profession, ChatGPT listed the following applications:1
• Predictive modelling;
• Traffic management;
• Energy management;
• Environmental planning; and
• Disaster response.

ChatGPT here provides a solid overview of avenues for the application of generative AI, but the technology may go even further. One can imagine a council’s duty planner roster being replaced by a chatbot trained on the relevant policies and able to translate their complex content into plain English. A text-based interface could be built atop a digital twin, modelling scenarios described in plain language and dramatically expanding the accessibility of such complex platforms.
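
To make the duty planner idea concrete, below is a minimal sketch of how such a chatbot might be wired together. It assumes a hypothetical ask_llm() helper standing in for whichever generative AI service a council might use, and the clause topics and texts are invented for illustration rather than drawn from any real instrument.

# A minimal sketch of a 'duty planner' chatbot. ask_llm() is a placeholder
# (assumption) for a call to whatever generative AI service is available;
# the clause texts below are invented purely for illustration.

POLICY_CLAUSES = {
    "dual occupancy": "Illustrative clause: dual occupancies are permitted with "
                      "consent on lots of at least 600 square metres.",
    "building height": "Illustrative clause: buildings must not exceed the maximum "
                       "height shown on the relevant height of buildings map.",
}


def ask_llm(prompt: str) -> str:
    """Placeholder for a generative AI call (assumption, not a real API)."""
    raise NotImplementedError("Connect this to an LLM provider of your choice.")


def duty_planner_chatbot(question: str) -> str:
    # Naive retrieval: pull in any clause whose topic appears in the question.
    relevant = [text for topic, text in POLICY_CLAUSES.items()
                if topic in question.lower()]
    context = "\n".join(relevant) or "No matching clause found."
    # Ask the model to translate the legalistic wording into plain English,
    # grounded only in the retrieved policy text.
    prompt = (
        "You are a council duty planner. Using only the policy text below, "
        "answer the resident's question in plain English and flag any uncertainty.\n\n"
        f"Policy text:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)

Grounding the model in retrieved policy text, rather than letting it answer from memory alone, is one of the simpler ways of limiting the misinformation risks discussed below.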

What are the issues with AI?

Despite the extensive potential of generative AI, a multitude of issues is also emerging. To appreciate the source of these issues, it is important to understand the basic underpinnings of generative AI. The AI models dominating headlines, such as ChatGPT, Bing AI and Google Bard, are known as large language models (LLMs). Because the amount of data used to train these models is so large, they can both sound human-like in their delivery and appear to be creative.2 Despite this appearance, generative AI does not have conscious thoughts; it simply reforms, recalls and reproduces answers based on millions of human-written sources. AI image generators such as DALL-E work in the same way, simply “swapping text for pixels”.3

Thus, human trends, preferences and biases permeate through results. This is demonstrated well by the images generated for this article, which reflect a highly stereotypical ‘future city’ aesthetic that no doubt dominated the relevant training material.

As a result of the LLM approach, worrying trends of ingrained bias have already been detected in AI engines. From a planning perspective, for instance, our historical tendency to plan cities around cars and men could, if AI trained on that history were to inform decision-making, threaten to undo progress made towards a fairer and more equitable city. Further, generative AI is highly effective at producing and disseminating false information. Because this information is presented with confidence and without citations, the trend is no doubt dangerous and counter-productive.

What does AI mean for trust in planning?

The importance of trust to the success of AI generally, of AI in planning, and of planning as a whole cannot be overstated. Forbes Magazine has called trust AI’s biggest problem,4 and the negative feedback loop of trust in planning is well established.5 With these factors in mind, trust can be explored through three key pillars – strategic planning, development assessment, and community engagement.

Strategic planning

The role of AI in strategic planning is both the simplest and the most progressed. Already, the embrace of a data-led approach, made possible by Geographic Information Systems (GIS) and other urban informatics, has no doubt led to an increase in trust in strategic decision-making. It seems logical, then, that this trajectory can continue, aided by AI and machine learning programs that enhance our ability to collect, understand, and activate data. AI can and will continue to support human decision-making in this space.
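
As a small illustration of what ‘activating’ such data might look like, the sketch below clusters an invented table of household travel behaviour into broad profiles. It assumes the open-source NumPy and scikit-learn libraries, and the figures are fabricated for demonstration rather than drawn from any survey.

# A minimal sketch of using clustering to surface behavioural patterns for
# strategic planning. The data is invented for illustration; in practice it
# might be anonymised travel-survey or mobility data held alongside a GIS.
import numpy as np
from sklearn.cluster import KMeans

# Columns: average daily trips, share of trips by car, median trip length (km).
households = np.array([
    [2.1, 0.90, 12.4],
    [3.4, 0.20, 3.1],
    [2.8, 0.85, 10.9],
    [3.9, 0.15, 2.6],
    [2.5, 0.80, 11.7],
    [3.6, 0.25, 3.4],
])

# Group households into two behavioural profiles (e.g. car-dependent vs local/active).
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(households)

for label in sorted(set(model.labels_)):
    profile = households[model.labels_ == label].mean(axis=0)
    print(f"Cluster {label}: trips/day={profile[0]:.1f}, "
          f"car share={profile[1]:.2f}, median trip length={profile[2]:.1f} km")

The point is not the clustering itself but that the output is a prompt for human interpretation: a planner still decides what, if anything, the profiles mean for strategy.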

Development assessment

In the realm of development assessment there is certainly potential for trust in the planning system to be increased somewhat through the implementation of AI. Generative AI models could be constructed to undertake code assessments, with a ‘check box’ style of assessment likely to play to AI’s strengths. However, the potential for more sophisticated AI assessment of discretionary matters should not be discounted and remains within the realm of possibility. Where such implementation may encounter problems is the issue of accountability. Who is accountable for a poor decision made, or incorrect advice given, by AI? How would AI involvement be treated in court? Is it fair or appropriate for organisations and/or individuals to be held accountable for the actions of a self-directing algorithm? These questions are hard to answer at this early stage of AI implementation, but they underscore the extent of the complex hurdles still facing the technology.
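
As a rough illustration of the ‘check box’ end of the spectrum, the sketch below tests an invented proposal against three invented numeric controls. Real planning instruments carry many more qualifications, and the discretionary judgements discussed above are deliberately out of scope.

# A minimal sketch of a 'check box' style code assessment against numeric
# development controls. The control values and proposal figures are invented
# for illustration only.

CONTROLS = {
    "max_height_m": 8.5,
    "max_floor_space_ratio": 0.5,
    "min_front_setback_m": 6.0,
}


def assess(proposal: dict) -> list[str]:
    """Return a list of human-readable compliance findings."""
    findings = []
    findings.append(
        f"Height {proposal['height_m']} m: "
        + ("complies" if proposal["height_m"] <= CONTROLS["max_height_m"] else "does not comply")
    )
    findings.append(
        f"FSR {proposal['floor_space_ratio']}: "
        + ("complies" if proposal["floor_space_ratio"] <= CONTROLS["max_floor_space_ratio"] else "does not comply")
    )
    findings.append(
        f"Front setback {proposal['front_setback_m']} m: "
        + ("complies" if proposal["front_setback_m"] >= CONTROLS["min_front_setback_m"] else "does not comply")
    )
    return findings


if __name__ == "__main__":
    for line in assess({"height_m": 9.2, "floor_space_ratio": 0.45, "front_setback_m": 5.5}):
        print(line)

Notably, this deterministic portion needs no generative AI at all; it is when models are asked to read plans, interpret wording or weigh discretionary matters that the accountability questions above begin to bite.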

Community engagement

A persistent issue in community engagement is the fundamental knowledge gap that exists between members of the community and technical practitioners. Within this gap, misinformation spreads and trust in planning is weakened. What if AI could offer a chatbot equipped with the knowledge of a multitude of technical disciplines, ready to explain terminology, best practice and process in an instant? Returning to the example of an AI duty planner, such a function could explain and simplify the planning system in a way tailored to each user, and this greater understanding is likely to increase trust. While some counter that the lack of human interaction would proportionally reduce trust, I’d instead suggest that a fact-based chatbot would probably appear to the average user much the same as Google does – ready to answer all their planning queries. In this sense, consistency and response time may go some way to selling the concept to stakeholders. However, one can also imagine the difficulty an AI chatbot may have linking a community query in plain English to a specific and legalistic control embedded within an Environmental Planning Instrument.
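
The sketch below, using invented clause texts, shows why that last step is hard: a naive word-overlap matcher finds almost nothing in common between a resident’s plain-English question and the control that actually answers it.

# A minimal sketch of the matching problem: a plain-English community query
# rarely shares vocabulary with the legalistic control that answers it.
# The clause texts are invented for illustration.

CLAUSES = {
    "Clause 4.3 Height of buildings": "the height of a building on any land is not to "
        "exceed the maximum height shown for the land on the height of buildings map",
    "Clause 4.4 Floor space ratio": "the maximum floor space ratio for a building on any "
        "land is not to exceed the floor space ratio shown for the land on the map",
}


def keyword_overlap(query: str, clause_text: str) -> int:
    """Count words the query shares with a clause (a deliberately naive matcher)."""
    return len(set(query.lower().split()) & set(clause_text.lower().split()))


query = "Can my neighbour build a second storey that blocks my sunlight?"
scores = {name: keyword_overlap(query, text) for name, text in CLAUSES.items()}
print(scores)  # Overlap is near zero even though Clause 4.3 is the relevant control.

Bridging that vocabulary gap is likely to require far more than keyword matching, which is precisely the difficulty flagged above.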

Conclusion

When introduced into a data-led and human-checked planning process, AI may have a net positive impact on trust in planning; but if left to answer questions of its own accord, it risks spiralling into worsening misinformation and mistrust. None of the above discussion is to suggest that planners will somehow be replaced by AI. At least for the technology of today, the politics and emotion that permeate this industry are far too complex and inconsistent to be meaningfully catered for entirely by AI. Even ChatGPT insisted that “it's important to remember that AI is just a tool, and it should be used in conjunction with human expertise and judgement”6 when prompted about the role of AI in planning. Nevertheless, AI is here and here to stay – it’s time that we as planners learned to use it.

16th June 2023

This article originally appeared in New Planner – the journal of the New South Wales planning profession – published by the Planning Institute of Australia. For more information, please visit: https://lnkd.in/dzcjN32X

Endnotes
1. “What does artificial intelligence mean for the future of the urban planning profession?” prompt. ChatGPT, 13 Feb. version, OpenAI, 10th April 2023, chat.openai.com/chat.
2. Vincent, J 2023, ‘7 problems facing Bing, Bard, and the future of AI search’, The Verge, 10th February 2023.
3. ibid.
4. Chintada, S 2021, ‘Addressing AI’s Biggest Problem: Trust’, Forbes Magazine, 25th October 2021.
5. See: https://www.planning.org.au/documents/item/10486
6. “What does artificial intelligence mean for the future of the urban planning profession?” prompt. ChatGPT, 13 Feb. version, OpenAI, 10th April 2023, chat.openai.com/chat.