
eGovernment Research since 2001

Retrieval-augmented generation is emerging as a way to help city halls get closer to more reliable, real-time generative AI usage, reports Clay Garner.

A recent survey found that 96 per cent of mayors express at least some interest in using artificial intelligence (AI). Yet, in their quest to supercharge government services with this technology, city leaders must weigh transformative potential against serious challenges.

Paramount among these is the issue of data integrity and the risk of AI generating misleading or entirely fabricated information – a phenomenon that city leaders and residents will find unacceptable. Against this backdrop, retrieval-augmented generation (RAG) has emerged as a technical method that could help get city halls closer to more reliable, real-time generative AI usage.

Why RAG matters

Current generative AI systems, particularly large language models (LLMs), have been criticised for their tendency to produce outputs that, despite appearing plausible, may not be factually accurate. Moreover, the behaviour of LLMs like OpenAI’s GPT-4 – why they perform the way they do – is still not well understood, which makes it difficult to standardise their outputs. This inconsistency poses risks in city governance, where reliance on inaccurate AI-generated information could misguide public policies, misallocate resources, breach privacy, or even precipitate disaster, eroding public trust and causing potential harm.

To counteract the inaccuracies that can be common in LLM outputs, RAG is increasingly viewed as part of the solution along with fine-tuning and prompt engineering. This method can improve the precision and applicability of AI-generated content by first accessing and integrating specific details from a reliable, pre-established data source, such as an internal wiki or SQL database, prior to producing responses. By grounding AI responses in verified content, RAG can reduce the occurrence of “hallucinations,” or instances where AI fabricates responses based on patterns learned during training, rather than concrete data.
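The retrieve-then-generate pattern described above can be sketched in a few lines. This is a minimal illustration, not a production design: the toy document store and word-overlap scoring stand in for an embedding index over an internal wiki or SQL database, and the grounded prompt would then be passed to an LLM.

```python
# Minimal sketch of the retrieve-then-generate pattern behind RAG.
# The document store and scoring are illustrative stand-ins; a real
# deployment would use an embedding index and pass the prompt to an LLM.

def tokens(text):
    """Lower-case words with surrounding punctuation stripped."""
    return {word.strip(".,?!").lower() for word in text.split()}

def retrieve(query, documents, top_k=2):
    """Rank documents by word overlap with the query; drop non-matches."""
    query_tokens = tokens(query)
    scored = sorted(
        ((len(query_tokens & tokens(doc)), doc) for doc in documents),
        key=lambda pair: pair[0],
        reverse=True,
    )
    return [doc for score, doc in scored[:top_k] if score > 0]

def build_grounded_prompt(query, documents):
    """Prepend retrieved passages so the model answers from them."""
    context = retrieve(query, documents)
    return (
        "Answer using ONLY the context below.\n"
        "Context:\n- " + "\n- ".join(context) + "\n"
        "Question: " + query
    )

# Hypothetical excerpt from a municipal knowledge base.
city_wiki = [
    "Recycling is collected every Tuesday in District 3.",
    "Building permits are issued by the planning office.",
    "Parking fines can be paid online or at city hall.",
]

prompt = build_grounded_prompt("When is recycling collected?", city_wiki)
```

The key design point is that only passages actually retrieved from the verified store reach the model, so its answer is anchored in concrete data rather than training-time patterns.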

RAG’s implications for smart cities

As city leaders explore the adoption of generative AI, they should consider how RAG could move them toward a level of data accuracy requisite for implementation.

With resident-facing AI service bots, a key concern is the dissemination of inaccurate information (as recently seen with an Air Canada chatbot gaffe), which could erode public trust and create confusion among residents. RAG can address this issue by cross-referencing the AI’s responses with verified information in near real-time, boosting reliability and relevance. For instance, when residents inquire about waste collection schedules, RAG could help the bot’s response reflect the most current municipal schedules and regulations, thereby maintaining the integrity of service delivery and bolstering public confidence in digital municipal platforms.
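The waste-collection scenario can be made concrete with a short sketch. The schedule data and function names below are hypothetical; the point is the grounding discipline: the bot answers only from the verified record and declines rather than fabricating a collection day.

```python
# Illustrative sketch of grounding a resident-facing bot's answer in
# verified municipal data. The schedule is hypothetical example data.

WASTE_SCHEDULE = {
    "district 1": "Monday",
    "district 2": "Thursday",
}

def answer_waste_query(district):
    """Return a grounded answer, or a safe fallback if no record exists."""
    day = WASTE_SCHEDULE.get(district.strip().lower())
    if day is None:
        # Decline rather than letting the model invent a schedule.
        return "No verified schedule found; please contact city services."
    return f"Waste in {district} is collected on {day}."
```

Refusing to answer outside the verified data is what distinguishes this from an ungrounded chatbot, which would happily guess.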

This precision in information retrieval and verification is also critical in policy development. The challenge here lies in basing AI recommendations on the most current and relevant data. By validating sourced data against a robust repository of legal documents, historical policy data, and recent studies, RAG ensures that policymakers are equipped with well-founded suggestions. For example, when drafting proposals on urban mobility, the system would cross-verify the AI’s data against up-to-date traffic studies and legal frameworks, leading to more informed, relevant policies that genuinely address community needs.

In urban planning, RAG plays a similar role in processing complex datasets, such as demographic trends and environmental impact studies, to support decision-making. The inherent challenge is to ensure comprehensive analyses that accurately reflect the latest conditions and regulations. RAG addresses this by verifying the data used in the AI’s analysis, so that decisions rest on accurate, current information. This verification helps ensure that proposed developments are both feasible and aligned with community expectations and sustainability goals, enhancing the strategic development of urban environments.

Finally, in emergency response coordination, the imperative for rapid and accurate information dissemination presents a unique challenge. RAG enhances the reliability of generative AI in this context by filtering synthesised information from various sources through a validation process. This ensures that advice and updates are consistent with official reports and real-time data from credible sources. For instance, in natural disasters, RAG’s verification process ensures that evacuation routes or shelter locations provided by AI systems are in line with the latest directives from emergency services and traffic conditions, thus safeguarding public safety and enhancing the effectiveness of emergency management.
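The validation step described here reduces, at its simplest, to filtering model-generated guidance against the official record before release. The route names below are hypothetical examples.

```python
# Sketch of the validation filter: guidance proposed by the model is
# checked against emergency services' official list before it is shown.
# Route names are hypothetical.

OFFICIAL_EVACUATION_ROUTES = {"Route A", "Route C"}

def validate_guidance(proposed_routes):
    """Keep only routes confirmed by the official directive."""
    return [r for r in proposed_routes if r in OFFICIAL_EVACUATION_ROUTES]

# "Route B" is dropped because it is not in the official list.
safe_routes = validate_guidance(["Route A", "Route B", "Route C"])
```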

AI implementation going forward

As cities consider the benefits of RAG in their AI implementation strategy, pilot programmes focusing on specific applications, such as public safety announcements or urban planning consultations, can offer valuable insights into the method’s effectiveness and areas for improvement. Even so, RAG on its own should not be viewed as a panacea for AI hallucination; it is best applied alongside other accuracy-improving techniques such as prompt engineering and model fine-tuning.

---

Author(s): Clay Garner

Source: Smart Cities World, 27.03.2024

