Editor’s Note: Tim Moreland is a graduate of Rhodes College and received his master’s degree from the University of Memphis.  From 2006 to 2010, he worked for the Memphis and Shelby County Division of Planning and Development.  I knew him best as a member of the small team of smart people who developed Sustainable Shelby, our community’s first sustainability plan, with 151 strategies to guide the region’s sustainability.  After moving to Chattanooga to work for city government there, he spent 15 years rising to become director of performance management and open data and administrator of the Department of Innovation Delivery and Performance.  In July, he founded Change Works Studio to provide “executive-level leadership with the agility, data, and systems-thinking required to solve the most persistent challenges in the public sector.”  He describes himself as a public servant, change agent, data nerd, and systems thinker.  He is one of the smartest people I know.

To prove the point, the following is his February 13 post about implementing AI in local government:

By Tim Moreland

7 Critical Lessons for Responsible AI Adoption in Local Government

Lesson 1: Start by understanding the lay of the land

I’ve observed that while local governments share many of the same problems and structures, solutions cannot be copy-pasted from one to another. This is because governments are complex organisms and there are too many confounding actors and factors to make replication a simple affair. I always seek to understand the current context when starting on new projects.

When starting an AI program, you will want to know:

  • Who are the cheerleaders for AI use?
  • Who is open to it but concerned?
  • Who might block progress outright?

Knowing this information will help you understand what some call the “opportunity space” for your AI implementation program. Your specific context will differ and will determine your approach. I was working in a midsized city in the South; your mileage may vary.

In my case, I had a mayor who was all-in on the use of AI and CIOs who were concerned about the potential risks and blowback from the community. The IT team’s response was to block anything AI-related on the city’s networks. I knew these tools would be used anyway and that a middle ground was needed. As the Administrator over innovation, I worked with the CIO at the time and the mayor to come up with a phased approach that had a clear line to broader adoption, but got there in a responsible way.

Lesson 2: Use a phased approach: crawl, walk, and run

One great way to de-risk something is to shrink the change. In our case, we decided on a crawl, walk, and run approach to responsible citywide AI adoption. This would allow us to systematically build the AI program while steadily progressing towards the broader goal of citywide adoption.

Phase 1: Crawling

In this phase, we brought together a representative and diverse group to test AI and learn together. We called it “AI for Good.” We gave this group broad access to AI tools and met regularly to discuss what we were learning. This allowed us to quickly find places where we needed more support (in our case, training to get the most out of AI models) and what would be needed to move to the next phase. We developed an onboarding process using our learning management system (LMS) to teach people about our generative AI policy and guardrails for use before they were granted access to AI tools.

Phase 2: Walking

In phase 2, we shifted to developing our overall policy framework to fit our existing organization and structures. We also rolled out the onboarding process for staff interested in using AI in their day-to-day work. Part of onboarding was a survey asking how they planned to use AI, which was a great way to identify potential use cases, themes of use, and how they changed over time. Speaking of use cases, we worked to identify the most common ones as well as several high-impact applications. In our case, these were a custom AI assistant for understanding the city’s code of ordinances and a machine-vision project to identify potholes.
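To give a sense of how simple the use-case analysis can be, here is a minimal sketch in Python. The survey answers below are invented for illustration; real responses would come from your survey tool’s export, and free-text answers would first need some normalization into categories.

```python
from collections import Counter

# Hypothetical answers to the onboarding survey question
# "How do you plan to use AI?" (already normalized into categories).
responses = [
    "drafting emails",
    "summarizing meeting notes",
    "drafting emails",
    "code of ordinances lookup",
    "drafting emails",
    "summarizing meeting notes",
]

def top_use_cases(responses, n=3):
    """Rank planned use cases by how often staff mention them."""
    return Counter(responses).most_common(n)

print(top_use_cases(responses))
# → [('drafting emails', 3), ('summarizing meeting notes', 2),
#    ('code of ordinances lookup', 1)]
```

Re-running a tally like this on each survey wave is one way to watch themes of use change over time.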

Phase 3: Running

For phase 3, we shifted from learning and experimenting to scaling use. This included standardizing the tools used (in our case, selecting Google Gemini, since we were already a Google shop). It was during this time that we formalized policy adoption and implementation. I put together a rollout and communication plan to ensure we would be ready. We worked with our partners at Google to develop training for the rollout and had our AI for Good members act as departmental resources for staff interested in using AI.

All in all, it took about a year to go through the phased approach. I left the city at the end of phase 3 and would be very curious to know where things are eight months later. If any of my former colleagues want to give me the update, I would love to hear how it is going.

Lesson 4: You’re not alone

If you find yourself struggling with something, chances are you are not alone. This is definitely the case with local governments working to figure out how to responsibly fit AI into their operations. While I was leading Chattanooga’s AI implementation, I leaned heavily on my peers in other governments for guidance, advice and emotional support.

The good news is that the ecosystem has matured significantly in the past year. One resource I found really helpful was the GovAI Coalition. What I love about the coalition is that it is made up of government practitioners, for government practitioners. It has a ton of resources to help you get started so you aren’t reinventing the wheel. As I said before, local context matters, so you will have to adapt the resources to make them work in your city. Since the resources are well thought out and designed for adaptation, it is very easy to do so.

Lesson 5: Find your early adopters in your organization

With any change initiative, it is important to identify your early adopters. One of the easiest ways is to let people know what you are doing, and the early adopters will find you. Once word got out about the AI for Good group at the city, the early adopters started pouring in. We got the word out about the group through the city’s newsletter and encouraged AI for Good members to share with others about the group. It is amazing how fast word of mouth can travel.

Anyone who knows me knows I am a data nerd. So, I worked with IT to find out who was using AI tools via the city’s single sign-on (i.e., their city Google account). This allowed me to see who was already accessing AI tools regularly and invite them to go through the official onboarding process during the walk phase. Here, usage data was very helpful in finding our early adopters.
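As a rough sketch of the kind of usage analysis I mean: the snippet below counts AI-tool sign-ins per user from a sign-in log and flags frequent users as potential early adopters. The names, app labels, and threshold are all hypothetical; in practice the events would come from your identity provider’s audit logs.

```python
from collections import Counter

# Hypothetical single-sign-on events as (user, app) pairs.
events = [
    ("avery", "gemini"),
    ("avery", "gemini"),
    ("avery", "gemini"),
    ("blake", "gemini"),
    ("casey", "email"),
]

AI_APPS = {"gemini"}   # which apps count as AI tools (assumption)
THRESHOLD = 3          # sign-ins needed to flag someone as an early adopter

def find_early_adopters(events, ai_apps=AI_APPS, threshold=THRESHOLD):
    """Return users who signed in to AI tools at least `threshold` times."""
    counts = Counter(user for user, app in events if app in ai_apps)
    return sorted(user for user, n in counts.items() if n >= threshold)

print(find_early_adopters(events))  # → ['avery']
```

The output is simply an invitation list for onboarding; the threshold is a knob you would tune to your city’s size and log volume.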

Lesson 6: Listen to and support your team

We approached this process as an experiment, designed to help us learn as much as possible, as quickly as possible. It started with what you might call informal focus groups through the AI for Good group’s regular meetings. We then shifted to using surveys to understand who was using AI, how they were using AI, what concerns they had, and how we could best support them.

This data was very useful in helping us develop the city’s AI program throughout all phases. We found a consistent adoption pattern: people started with simpler use cases (drafting emails, for example), but as they became more comfortable with the tools, almost everyone began to find more complex ones. In addition to the complexity of use, the amount of use almost always went up. We also found that people needed two things to be comfortable using AI: 1) clear guardrails for use, and 2) training on how to best use AI. With those two elements in place, adoption accelerated.

Lesson 7: It’s not over… when it’s over. Sorry 🙁

I wish I could tell you that once you have done all this you can hang up the “Mission Accomplished” banner and move on to the next thing. Unfortunately, you are just at the end of the beginning. If you want to get the most out of your AI program you will need to keep working at it. There are three areas that will more than likely need your continued effort: 1) AI Governance, 2) AI Policy, and 3) AI Adoption.

AI Governance

As you start operationalizing your AI policies, you will likely find the edges of your governance practices. That means you will have to keep updating your AI governance documents over time and whenever your policies change.

AI Policy

As AI technology continues to develop at a breakneck pace, you will have to update your policies. Luckily, there are organizations such as the GovAI Coalition that are constantly updating policies to match the new technology landscape. But AI policy isn’t a one-and-done thing.

AI Adoption

Just because you rolled out your AI program and policies doesn’t mean broad adoption will follow. I personally think a data-informed process should be used to understand who is using AI and who isn’t. This allows you to target specific groups or individuals, find out what they need to be successful with AI, and co-design solutions for them. I would revisit this data every six months to see how the AI program is going and what you can learn to make it more effective.
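As an illustration of the kind of six-month check I have in mind, the sketch below joins a staff roster against the set of active AI users from usage logs and computes an adoption rate per department. All names and departments are hypothetical.

```python
# Hypothetical roster mapping each user to a department, plus the set of
# users seen in AI usage logs over the review period.
roster = {
    "avery": "Public Works",
    "blake": "Public Works",
    "casey": "Finance",
    "drew":  "Finance",
    "emery": "Finance",
}
active_ai_users = {"avery", "casey", "drew"}

def adoption_by_department(roster, active):
    """Return each department's AI adoption rate (active users / total staff)."""
    totals, adopters = {}, {}
    for user, dept in roster.items():
        totals[dept] = totals.get(dept, 0) + 1
        adopters[dept] = adopters.get(dept, 0) + (user in active)
    return {dept: adopters[dept] / totals[dept] for dept in totals}

print(adoption_by_department(roster, active_ai_users))
```

Departments with low rates are the ones to sit down with: the numbers tell you where to ask questions, not why adoption is lagging.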

For those of you working in government, what lessons have you learned?

I would love to hear from others. What have you learned when trying to operationalize responsible AI usage in local government? Any lessons I left out that you think would be helpful for others?

Note: You can reach him at https://www.linkedin.com/in/tim-moreland/ or https://www.changeworks.studio/#contact.