Driving Public Health Innovation with AI: Data Modernization, Effective Collaboration, and Ethical Policy Development

Resources

In this session, participants explore how artificial intelligence (AI) is being applied to drive innovation in public health through data modernization, effective collaboration, and ethical policy development. Presenters share how AI was used to accelerate a system-level data modernization assessment, highlight a statewide initiative that leveraged collaboration to advance a data revolution, and offer practical tools and lessons for developing ethical AI policies in public health organizations.

Presenter(s):

Download the slides.


Transcript:

This transcript is auto-generated and may contain inaccuracies.

Rachel Brink:
All right, we’re gonna get started. So welcome to Concurrent Session A4, more commonly known as Driving Public Health Innovation with AI: Data Modernization, Effective Collaboration, and Ethical Policy Development. My name is Rachel Brink. I’m the Associate Director for Data Modernization at NNPHI. This session will highlight the use of AI in public health data modernization, systems planning, strategic partnerships, and collaboration, as well as AI policy development. As shown on the agenda, each speaker and presentation will hit on different aspects of using AI. A few housekeeping notes: the documents on the table are for the third presentation, just for clarity there. And we ask that you save all your questions until the end, when we’ll have all three presentations finished and some time for Q&A. So without further ado, let’s welcome up our first presenters, Ben and Susan.

Benjamin Mesnik:
Okay, well, hello everyone. Welcome to this amazing session. We’re super excited to be here today. So, as we were introduced, my name is Ben Mesnik, and I am joined by Susan Robinson, and we’ll talk about Arizona’s pathway on a data revolution. Here’s us with our pictures, just in case you don’t see us up here. I serve as the strategic grants administrator, and Susan is our Assistant Director of Strategy and Innovation. Yeah, her picture has changed. A little bit of what we will talk about here in our session: articulating Arizona’s data modernization framework and priorities, formulating strategies around cross-sector collaboration and strategic partnerships, evaluating approaches to data governance, privacy, and security, and then identifying lessons for workforce development and our partnerships.

So all that being said, in summary, we’ll have a great conversation about how to be partners and some cool data modernization things that we’ve learned. And getting to know Arizona. So, Arizona is very unique. It is also very hot right now, but a dry heat, so we’re adjusting to the humidity a little bit. I feel like if you’re from Arizona, you have to talk about the heat to be from Arizona. So we are talking about how it’s different here, yeah. But a unique fact about the Arizona Department of Health Services is that we are a decentralized state agency, with 15 counties and 22 tribal nations, all of which operate independently. And so it’s a really unique structure for us to navigate a centralized approach to advancing Arizona initiatives as a whole, while continuing the individual profiles and partnerships of each organization as they stand.

Our team at ADHS is about 1,600 employees. Our mission is to promote, protect, and improve the health and wellness of individuals and communities in Arizona. We have a vision of health and wellness for all Arizonans, and, as I mentioned, we serve all populations in Arizona, including the 15 counties and 22 tribal nations. This state is about 113,000 square miles, so we have some urban, some rural, everything you could think of. It’s a long way to drive across it sometimes, but it’s a really great state if you haven’t visited. Our values talk about how we hold ourselves with integrity, collaboration, accountability, equity, focus, excellence, and dedication.

So strategic partnerships. These are not just handshake deals. These are truly transformative alliances. These are not just one-and-done. These are evolving, continuous, and growing relationships that we have with one another. It’s about shifting your mindset from what can I get to what can we create together? And it’s really important, because this can vary from internal partnerships within your agency, across teams, all the way to cross-sector relationships, from government agencies to private organizations to nonprofit, and all in between, making an even tri-sector relationship. And this approach that we’ve created and outlined for this framework really built a synergistic approach that is greater than the sum of its parts. So we’ll talk a little bit more about how each of these provides some really great details.

So we’ll go through the why of aligning purpose and values, the what of creating unprecedented value, the who of the architects, contributors, and audience, and the how of building trust and resilience through implementation. The last two have question marks. It’s not a typo; those are just emphasizing that these are very important questions to ask about who needs to be involved and how we’re going to get this done. The partnership details give a little bit more of an expansive lens on how we identify and move through creating strategic partnerships together, through our lens here with Strategy and Innovation at ADHS. So, the why: align our purpose and values, find that common purpose, seek partners with compatible values, and build a foundation on a shared purpose, not just a shared profit or shared output.

The next one (I went back a slide there; I was making sure we were going in the correct order) is creating unprecedented value. So, focusing on the output, solution, or creation, you innovate together. And this does have a unique output, in the sense that you can both be creating toward individual things, as long as they come together into a joint solution, partnering to fill your gaps, not just mirroring your strengths, and creating that unprecedented value we’ve noted. Who are the architects of the alliance? Now, there are a lot of people we can easily say, “Oh, yeah, loop this person in; this person would be great for that. Let’s include all these people.” Great.

We’re getting people in the room, but are we intentionally understanding what each of those individuals is bringing to the table, both with their experiences, their understanding, and who they can tap as additional resources? It’s also kind of like everybody is one person removed from somebody else. That is how we look at partnerships: everyone knows somebody, everyone knows someone in a different field, and allowing those individuals to bring in additional people empowers the teams to build the day-to-day relationships. It’s so important to build those relationships on a large scale, not just for this project, but asking, how can I be a partner for you in the future as well? How can we continuously evolve our partnership, selecting partners who offer a range of experiences and voices to enrich the collaboration? We don’t know the best practices all the time. We need to ask for help, which is a hard thing to do, but it often gives some of the strongest results.

How do we build trust and resilience? Through open communication, and by ensuring that the partnership benefits your company, your customers, and the community. Often, those individuals are the public, which makes it a really exciting opportunity to give back to the community, help advance it, and support its growth. We’re starting small, building momentum, and adapting, and the journey is as important as the destination. So this is how we create our framework here, within the area of Strategy and Innovation, on how we navigate and develop our partnerships. With that being said, I’ll pass it over to Susan. Thank you, Ben.

Susan Robinson:
All right, so you’ll notice we have not talked about AI yet, and that’s okay. It will be coming. But these stages of setting the relationships, being intentional about the time and the space that we spend with folks, matter. AI is a great example: either it will work because you have built those relationships and you have figured out how to work together, or it will fall apart and just be a really cool, shiny toy, and we have lots of those in the ether right now, right? So why data modernization? Why do we care about this and AI? This is probably not the room where I have to really dig into why we’re all here and doing this. We all know this. We went through a real-world example of where our systems and our data were not where we needed them to be, and we didn’t have a workforce trained enough to adapt and get moving on what we needed. So we need to move forward from that space, right? And we need to build both our structures and infrastructure, as well as our people, to be able to do that.

AI, I think, has a lot of opportunities. We talked about it a lot yesterday in the DMI pre-conference: really understanding where we can use it and where we can’t. There are a lot of spaces where we can use it. But again, none of that matters unless you have the relationships and the trust you’ve built with your partners. Because there is not one agency here that can do this by itself. There’s not one agency that’s going to be able to totally take on AI as a whole and implement it. We’re all going to have to work together, whether it’s with your universities, your locals, or even, you know, the federal partners. So for us, it was really, really critical to bring the right people to the table.

So, you’re all recipients of PHIG; you know this is in your requirements, right? Have a DMI advisory committee of some nature. And so when we looked at this, I really wanted to make sure that we, yes, included internal folks, because you all know that those humans impact and matter on what we can do. But it was also really important for me to ensure that I had really strong voices outside of my agency who were going to come to the table and really help me figure out how we co-create solutions, AI included, so that we’re really going to bring public health forward together. And so we have 30-plus individuals from about 13 organizations. So I’ve got several of my local health departments. I only have 15 counties, so I know that for many of you, that seems like crazy pants, but I also didn’t want all 15 of them. Love them deeply. That’s a lot of voices in the room.

So we worked together to kind of decide who wanted to be at the table, both large and small counties. I’ve got some tribal representation. You heard. I have 22 tribes here in Arizona, such a critical voice to have at the table, one that I cannot overlook. We have all three of our public universities sitting at the table with us, and I always force them to work together, which I’m not sure that they love me for, but that’s how my money works. So they get to work together. And we also have a lot of associations. We have hospital associations and some other groups that are in the health ecosystem coming to the table to really be able to offer up a lot of the insights about what’s happening in their space and field, to make sure that we are being responsive to what is happening through all of this.

So when we looked at what we wanted to move forward with, these are a couple of spaces where we started, and we continue to ask about them as we meet and have conversations. The ecosystem is huge, particularly with AI; there’s not going to be one group that has everything or all of the knowledge, but we all are going to bring a piece to the table, right? As local and state public health, we have the data. We are the data stewards. We own the data. Our university partners have been great in terms of understanding what knowledge they have, what tools they have at their disposal, and so they are an interesting partner to bring to the table when it comes to AI, too, and to what we’re trying to answer. Our locals, in particular, are fabulous partners for this, right? They are boots on the ground. They are the ones that can tell me, “I need you to solve for this,” not just build the next shiny tool, hand it to me, and hope that it all works out.

We’re talking about data flow and governance, the ultra-sexy part of data, I’m sure you all think, right? Governance. Fun stuff. This is where AI thinking can really make a huge impact for us here in public health. And something that we’re looking at in Arizona is, how do we streamline and make easier the process of governance? It’s kind of a bear. It’s the least exciting thing, probably ever. Those of you who love it, I’m so sorry. No disrespect. But AI has a real opportunity to come in and really help streamline some of this, because again, it is not the most exciting thing. It’s not the thing that everyone wants to spend all of their time on. So if there’s a tool that I can bring to the table that helps, that would be amazing, and people will use it, because governance, without actually using it, is just a lot of words on a piece of paper, unfortunately.

Innovation and technology adaptations: this is pretty obvious, I think. We’ve heard about it a lot, the need for public health to start integrating with what healthcare and health IT are doing. These are all spaces, right, where AI is going to start interconnecting us, being able to talk the same language, work together, and really understand how we use those. And then the cross-collaboration, the cross-sector partnerships: again, universities, hospitals, healthcare, IT as a whole. These are all spaces where this type of innovation is moving very quickly. How do we, as public health, find ourselves participating, frankly, in the conversation with all of them? So these are a smattering of things that we have going on. I like to stay busy, and I like to not just be focused on one thing. So this keeps me moving and grooving on a general basis.

Data lakehouse: don’t ask me what it actually means. I play a technologist on TV, that’s it. But it totally works, and I love it, and there are lots of great reasons for it. Master person index: one of my favorite things that, as public health, we’re really working on. We’re actually using some machine learning with our MPI, which is helping us move from a very algorithmic option to much more of a, you know, well, machine learning option. Duh. Ignore me. State health data commons: this is work the universities are really interested in doing, so that we can take health data from across different sectors, safely, securely, and appropriately bring it together, and be able to use it across different questions and areas, really reducing the friction for everyone.
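As a toy illustration of the kind of matching step a master person index (MPI) automates, the sketch below scores how similar two person records are. This is illustrative only: the field names, records, and similarity measure are invented for the example, and Arizona's actual system, per the talk, uses machine learning rather than this simple string comparison.

```python
# Minimal sketch of similarity-based record matching, the kind of decision
# an MPI makes when deciding whether two records describe the same person.
# NOT Arizona's system; fields and scoring are hypothetical.
from difflib import SequenceMatcher

def match_score(a: dict, b: dict) -> float:
    """Average string similarity across shared fields of two records."""
    fields = ("first", "last", "dob")
    scores = [SequenceMatcher(None, a[f], b[f]).ratio() for f in fields]
    return sum(scores) / len(scores)

rec1 = {"first": "Jon", "last": "Smith", "dob": "1980-02-01"}
rec2 = {"first": "John", "last": "Smith", "dob": "1980-02-01"}

# A high score suggests the records likely refer to the same person.
print(round(match_score(rec1, rec2), 2))
```

A real MPI replaces the hand-picked threshold and equal field weights with learned ones, which is roughly the move from "algorithmic" to machine-learning matching described above.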

Academic health departments: some of you may be academic health departments, in which case I would love to chat with you about what that looks like and how you’ve worked that out. But this has been part of our goal here, too, so that we can really appropriately leverage our universities. We’ve got, you know, several universities, both in public health and technology, that are really doing some cool things. And so having an actual MOU, an actual agreement, something on paper that says this is what it looks like for us to work together, is really important. Then there’s this idea of a one-stop shop for our data. So I look at our website, and I don’t know how to navigate it, so I certainly don’t expect the general public to. I’m working on this, by the way. So, PHIG, thank you. You are also helping me with this one. But the idea is a one-stop shop for data, right?

So what is the best way that I can get data into the hands of those Arizonans who are making choices about their day-to-day health? This is one of the options that we’ve gone with for the data portal, and there are opportunities, certainly, for AI here. We’ve been very slow and intentional about rolling AI into data that’s going outside. And the last one here: we are a Google shop, so Google Gemini has been a real gem, I would say. Ben uses it a lot for just taking notes, putting them together, and giving me something really quickly that I can actually use, which I so appreciate. But there are so many opportunities within public health, particularly in the operations space. This has been a huge win for us, whether it comes to writing or, you know, taking my SAS code and putting it into Python or R, things of that nature. So, really enjoying Gemini.

I’m not the person to talk to, though, because I don’t actually have the tool. This is the really important piece for me here: again, with the really shiny tools, unless you figure out how you’re going to work with the workforce you have and upskill them to where they need to be to know how to use tools like AI, you’ve put a really shiny tool in front of somebody with zero sustainability. So, connecting all of the fun little dots that we’ve talked about this week so far, this is just really, really important as we continue forward in this advancement. Luckily, we have our universities at the table. Turns out their main goal is to teach people, which is awesome. So I love that, because I love using people where they’re doing their own jobs, and I can just benefit. So that has been really important. I will say it has been tricky for me to figure out how to say that I need you to teach my epidemiologists data science. I don’t know that I’ve interpreted that beautifully, so if anybody has figured out that interpretation, just holler. I’d love to steal that from you.

But lessons learned. So again, at the base of this, I mean, I love technology, but the partnerships and the relationships, started at the front with a strong purpose of why we’re together, why we’re working together, and what we’re trying to achieve, are the only way that this all works. It’s the only way that I think we can continue to make things work. And it’s so important right now, as we enter really turbulent times, as we enter spaces and places where we are unsure what things are going to look like on the other side. These are the partnerships that we have to maintain, that we have to keep building on, to ensure that we can continue forward, we can innovate, we can keep doing these really cool things, even in that turbulent time. So, it’s all about foundation for me; it’s always going to be about foundation for me. So thank you, guys, so much, and I’m going to pass it to Nicole.

Nicole Yerkes:
Can everyone hear me? Okay, awesome, great. So we all submitted our abstracts separately from each other, and so we’re trying to kind of tie together how we all connect. Arizona was kind of high level: what they’re doing with data modernization, and how AI could potentially be involved in what they’re doing in the state of Arizona. Mine’s kind of an in-between of the two, where I’m going to be talking about the strategic partnerships we had in making our data modernization assessment, and then how we used AI to accelerate that, and the data modernization assessment tool that we’re giving out to everyone.

Also, hi, I’m Nicole Yerkes, rhymes with jerks. I’m the data modernization director for the Utah Department of Health and Human Services, and, like I said, I’m going to be talking to you about how we used AI to accelerate our enterprise-level data modernization assessment in Utah. In Utah, we are a decentralized public health system. We have 13 local health departments and eight federally recognized tribes, and we wanted to approach data modernization as a public health system. So we wanted to make sure that all of the voices operating in Utah’s public health system had an opportunity to be heard and be included in our assessment, and to really understand what the needs are, not just within the Utah Department of Health and Human Services, but across our 13 local health departments and our tribal public health agencies.

So we conducted 18 focus groups across that entire public health system. We did it through the lenses of your area of focus in public health. So we made sure that we had environmental health experts, chronic disease experts, health equity experts, and we had people from Medicaid. We had people from our juvenile justice services within our super agency of Health and Human Services, and we worked with our federally recognized tribes. We tried to cast a wide net: those that are funded through IHS, those that are, I think it’s a 638, where they have their own authorities, and then those that are in an urban center with a different type of funding. So, really trying to cast that broad net, make sure that those voices were heard, and have everyone at the table. And that produced 60-plus hours of transcripts that we then needed to run through a qualitative analysis to pull all that information into a multi-year strategic plan for data modernization in Utah.

So we utilized a Google Gemini pilot that we were able to jump on. And this is our NotebookLM page, where we were able to pull in all of the different transcripts. And it likes to give you a nice little emoji for your topic, and I thought there was nothing better than that emoji for data modernization, where you’re just spinning. Thought that was hilarious. Thank you, NotebookLM. So, our methodology: we utilized Google Gemini at the time. We are a Google shop at the Utah Department of Health, and during this time, when we were trying to do a qualitative analysis for our strategic plan, no one was allowed to use AI yet, but there were a few people who were pulled in to pilot some use cases for Gemini specifically. And I was able to tap an individual who was in that pilot group and said, “Hey, I’ve got a use case for you. Can I use your laptop? Can I use your AI?” And he was like, “Heck yeah, let’s do it,” which was awesome.

And my former data modernization coordinator, Katie Zimmerman, is amazing. She went through our Coursera licensing to do a Google AI prompting course so that we could utilize NotebookLM to the best of our ability. That’s something else that I think has been really cool, kind of a side tangent: with our funding for public health infrastructure, we have Coursera licenses, and being able to give those out to staff to upskill lets us take on opportunities like this, where we didn’t know that we would have AI and an opportunity to jump in and use it for our DMI strategic plan. So the fact that that was available, and that Katie was able to do that course really quickly and then go into this with some education on how to do this appropriately and efficiently, was awesome. So huge kudos to our workforce development team on the Coursera licenses.

So we utilized NotebookLM. As I said, we had 60-plus hours of transcripts that we were running through this, and we were able to do a qualitative analysis for all 18 focus groups altogether, as well as 18 individual session summaries, in an hour and a half. We were able to crank that out in an afternoon, which was insane. And then we did a comparative analysis, where a human in the loop did three of our sessions using NVivo for a manual qualitative analysis, and that was taking seven hours per session. So our intern spent 21 hours doing just three sessions, where we did all 18, plus one overall summary, in an hour and a half using NotebookLM.

So these were some of the prompts that we utilized to get what we needed out of those session transcripts and session summaries. I’m not going to bore you by going through it all, but I’ll just tell you it was a learning process. There were some that worked well, some that didn’t work so well, and some that were a little bit duplicative. When it comes to data modernization, and asking AI to use simple language and put it into an eighth-grade reading level or something like that, I think there’s a time and a place for it, but you lose a lot of the terminology and really the intent behind some of the words that we use in data modernization. So we found that the eighth-grade reading level, utilizing NotebookLM, wasn’t the most effective. But I think there’s some more tweaking and playing around with it that we could do to get it to a point where I feel like we’re able to convey our messaging appropriately, while also making it so that those that are not in the data modernization space, those that are not IT people, can also have a better understanding of what we’re talking about.

So, a future note for how we could utilize this. All right, AI versus NVivo. I’ve already talked about this: it was an hour and a half total utilizing AI, whereas our NVivo process was manual, seven hours per session. AI, I thought, did a really good job of successfully gathering the overall themes and sentiments that we had in the individual sessions, as well as when we compiled them all into one, and was able to pull those out pretty nicely. AI did struggle with specifics and pulling out exact quotes. I think part of this was because our transcription quality was pretty low, partly because of the way that we were able to transcribe these sessions. Sometimes we were all in person, so we just had a Google Meet pulled up, and then it would say that it was just me speaking for the entire four hours. So, for AI to parse the different voices coming through in the room when it was all attached under my name was a little challenging.

Whereas with NVivo, when we did the manual process, same thing: we were able to successfully gather those overall themes and sentiments, but we had a better opportunity to pull out specifics and quotes. Going through that as a person, you can really see the different voices that were coming through in those transcripts. So, the human in the loop was much better able to interpret those tough transcripts. As an example, here’s just what NVivo looks like and how you do it. For any of those that have done qualitative analysis: you just read through the transcript, you highlight things, you attach an overall theme, and then it pulls it out on the side. I wasn’t the one who did it, so if you have deeper questions about it, I can connect you with who did, but it is a very manual process. It’s very mind-numbing, having to go through these transcripts and attach these themes to certain words. Then it’ll start to do it automatically, but it is a very labor-intensive process.

Whereas our AI output, as you can see on the left, we just uploaded our transcripts to the left side, and then we had our prompting in the middle that was able to pull out those themes and answer the prompts and questions we had for it. And then also, we’ll get to this a little bit later, but there was some fun functionality with it, where you could have an audio overview of what the session is telling you, and it’ll create a podcast episode for you, which I think is really fun.

So, cost savings. It took us nine months to acquire one NVivo license. It was a nightmare; I hated every part of the process. We were supposed to have our NVivo software by November of last year; we didn’t get it until March, even though we started working on it in July. So thankfully, we had this opportunity to jump on the pilot with Google Gemini, or else we would not have made our deadline of having our DMI strategic plan ready for our DMI Council by the end of March. AI really saved our butts on that one, being able to have a finished, quality product to share with those who participated in our focus groups and then with our DMI Council. And now we have a really robust strategic plan that we can use beyond the life of PHIG.

For NVivo, it’s $1,500 for a single license. As of right now, Gemini is just a part of the Google Suite; it’s not something that we have to pay extra for, so the cost savings are just face value right there between NVivo and Gemini. Gemini is more cost-effective. The time savings were immense. Like I said, it was three sessions in 21 hours, and we had 18 total sessions, plus an overall summary to do; that would have taken us weeks to accomplish, whereas AI could do it in an afternoon. And it eliminates the need to purchase further qualitative analysis licenses for other users. NVivo is stuck to the one person who was given the license. So if you have someone else that’s needing to do a qualitative analysis, it’s not like you can just transfer that license easily. Whereas with AI, there’s a lot more customization and ability for people to utilize it without that license-transfer issue.
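The back-of-the-envelope time comparison above can be sketched in a few lines. The figures (seven hours per session, 18 sessions, an hour and a half total for the AI run, $1,500 per license) are the ones quoted in the talk; the variable names are just for illustration.

```python
# Illustrative arithmetic behind the NVivo-vs-NotebookLM comparison.
# Figures are those quoted in the session; this is a rough estimate,
# not a formal cost analysis.

NVIVO_HOURS_PER_SESSION = 7     # manual qualitative coding, per session
SESSIONS = 18                   # focus-group sessions analyzed
NVIVO_LICENSE_COST = 1500       # single NVivo license, USD
AI_TOTAL_HOURS = 1.5            # all 18 sessions plus overall summary

manual_total_hours = NVIVO_HOURS_PER_SESSION * SESSIONS
hours_saved = manual_total_hours - AI_TOTAL_HOURS

print(f"Manual estimate:  {manual_total_hours} h")
print(f"AI-assisted:      {AI_TOTAL_HOURS} h")
print(f"Hours saved:      {hours_saved} h")
print(f"License cost avoided per extra analyst: ${NVIVO_LICENSE_COST}")
```

At seven hours per session, the full manual pass would have been 126 hours, which is the "weeks versus an afternoon" gap described above.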

With AI, as I said, we had 18 listening and learning session summaries, and then we had one final scoping project summary that incorporated all 18 of those listening and learning sessions. And then we were able to produce three podcast episodes with AI, including one specific to our Utah public health lab and their session. And then we played around with some different prompts on the overall session summary. In the app in Cvent, I did share this information with you all, if you would like to see it. So we have some of the session summaries. We also have our facilitation guide, our DMI strategic plan, and our DMI roadmap, all of which were created with the help of AI and Google Gemini, pulling out those analyses and themes. So if you want to reference those, you have them in the app, and you can also ask me any questions that you have about them later on.

And then finally, the little podcast episode. What I think is really fun about this is that it’s giving information differently. It’s a way that people can listen to what you have to say without needing to read something. They could play it in the background. It’s in a format that I feel like we’re all used to now, where people are kind of bantering back and forth, talking about these topics. So I’m just going to play you a short clip of it so you can hear it if you haven’t.

Video Speaker #1:
Before diving into a pretty big topic. Yeah, it’s Utah’s Data Modernization Initiative. And basically, the state’s trying to, like, you know, revamp its whole public health data system.

Right. Bring it into the, you know, the 21st Century. Yeah, exactly. And we’ve got a lot of information to sift through here. We’ve got summaries of what people are actually saying, you know, stakeholders, those who are working with the data, and then, like, the overview of the initiative itself. So

A lot to dig into. Yeah. So our goal is to figure out what the biggest takeaways from all of this are, like, what are the challenges they’re facing, what are the opportunities? And, you know, really, just figure out why this matters for, like, everyone in Utah.

Video Speaker #2:
Absolutely. And you know. Yeah, I think it’s important to start by saying that, you know, data modernization in public health can sound really technical.

Video Speaker #1:
Oh, yeah, for sure.

Video Speaker #2:
But really, it’s all about giving the state the best possible tools to figure out what’s going on with people’s health and then how to respond to it, you know. So it’s kind of like, you know, if you think about going from like, handwritten notes to a whole, digital system, but like, way bigger.

Video Speaker #1:
Yeah, way more important.

Video Speaker #2:
Way more important.

Nicole Yerkes:
Apparently, you can’t just pause it; it just starts over. So we’ll skip on to the next slide. But I thought that was a really fun way to demonstrate, you know, the work that we’re doing in a unique format, where they’re talking and trying to bring down the language of data modernization, and I think it really helps with the different audiences that you’re talking to. So, in conclusion, Google Gemini was awesome. It helped us get to where we needed to be on the timeline. Those 60-plus hours of transcripts were a lot, and it really helped us synthesize them really fast and pull out those common themes. We shared it with all of our focus group participants and asked: Is this accurately reflecting your voice? Do you feel like what you wanted to say in your space of environmental health data modernization is reflected here? And we got a lot of positive feedback there, and I’m very proud of the product that we have.

And I think AI was really huge in helping us get to that point. So with that, I just want to acknowledge the team that worked on this; they were amazing. Jeff was in the pilot and let us use his computer for an afternoon to play around with NotebookLM. Chloe, our amazing intern, is the one who spent 21 hours doing the manual qualitative analysis of a few of our sessions for the comparison. And Katie was our DMI coordinator, now the EpiTrax operations manager, and she did a really wonderful job learning AI prompting and how to utilize the system. Thanks, guys. Thank you.

Tatiana Lin:
Good afternoon. Very excited to have both an in-person audience and an online audience. My name is Tatiana Lin, and I’m the Director of Business Strategy Innovation. I’m going to be tag-teaming with my colleague, Shelby Rowell. If you’re online, we also uploaded to the app a couple of the resources we distributed in person, so feel free to access them. In case you’re wondering what you have on your tables: you have a resource guide, which is a compilation of resources we have been putting together, a list of webinars, blogs, and other toolkits related to this topic that we produced with partners. We also have a note-taking sheet; it follows our presentation section by section, so use it if it’s helpful.

So here is where we’re going to take our conversation today: we’ll talk a little more about what we’ve been doing, the many conversations with PHIG grantees and what we have heard from the field, and then we’ll talk about AI policy development and the AI governance framework, which should be fun. We want to recognize our partners. Our extended team includes Emma Urich and Aaron Davis, and we work with partners at Wichita State University; they’ve been instrumental for our Public Health Infrastructure Grant Region 7. And of course our partners from Region 1, Ben Wood right there, and his team, and Eric Gayton, have been instrumental in helping us pull these resources together. I think the real value of the PHIG partnership is learning from each other, partnering with each other, and sharing resources, so I appreciate that a lot. And of course, our great presenters provided some specific use cases.

So the resource we’re going to share with you today, which builds on our work here, is called the AI Guidance and Toolkit. Why did we develop this resource? We pulled it together about a year ago. We noticed there were different templates for AI policy development, but none were specific to public health. Also, as we were trying to develop our own organizational policy, we found it’s hard to write legal language for AI, especially if you don’t fully understand what AI is. So we embarked on a journey to sift through a lot of state-level policies and the literature, and produced a resource with sample provisions typically used in AI policies. That was the impetus. We appreciate PHIG’s support of this work.

To put this in context of who we are: we are a public health institute, but we work a lot with government agencies at the county and state level, and it’s been a pleasure doing that work. Our goal is very ambitious: to make Kansas the healthiest state in the country. Apologies to those of you who are probably pursuing the same vision. We are non-partisan and non-profit, so we’re really there for the public good. One thing we wanted to start with, which is a new part of our presentation, is reflections from the field, in case you’re wondering where everybody else is in the AI space. Of course, it can’t be fully generalizable, because it just comes from our experience doing the workshops. Then we’ll dive a little more into policy. These are the resources I mentioned; this is just a snapshot of them.

So where do these reflections from the field come from? This is a map we’ve been building over time. It shows where we’ve been: in the last year, 2025, we have done 27 workshops, a lot of them PHIG-related. You can see the national ones, which span the whole spectrum of national workshops, as well as state-specific ones. I want to show you the map for a couple of reasons, not just because we’re excited that we get to do this work, but because there’s so much need. If you’re in your state and have the capacity to support others, there’s a real need to help build this AI capacity and literacy. For example, the work we’ve done has reached about 5,000 participants in just the last couple of years.

So what are we hearing? We typically do some polls and surveys, and these are some key takeaways when we ask folks how comfortable they are using AI tools. And honestly, right now we’re all talking about small AI; in my head, all these publicly accessible tools are still small AI, and hopefully we get to the bigger AI, such as predictive analytics tools, at some point. But even with those tools, we’ve seen that most respondents are a little curious or somewhat comfortable but still learning; only a few people in those workshops raise their hands and say they’re completely comfortable with using it.

What do you use AI for? We are still using it mostly for brainstorming, summarizing, and grant writing, so there is still a lot of potential to use it for bigger tasks. Because when we ask what challenges people think AI can solve, folks get into the big data analysis piece: they want to see more debugging of code, data visualization, analytics, data mining, predictive analytics. So there’s definitely more ambition and more interest there. Translation also comes up quite a bit; people really want translation tools. And actually, there is a good tool. We’re not endorsing the vendor, but some health departments are using a product called Pocketalk, a HIPAA-compliant tool for translation. We know the Niagara County Health Department in New York has been using it, and Johnson County, Kansas, has piloted the tool as well for those purposes.

When we ask organizations where they are in developing their AI policy, we typically hear that they haven’t started yet or have just begun drafting it. And there’s always tension about what comes first, literacy or policy. Some folks feel we need to build literacy before we can jump into the policy, a kind of chicken-and-egg game, and also figure out what’s happening at the state level. By the way, NCSL, the National Conference of State Legislatures, is a great resource for tracking state-level policies and pulling resources together. If you’re wondering where your state legislation stands, I really recommend checking that website. Even in the last session, around 1,000 pieces of AI legislation were introduced across the country.

Another interesting takeaway is that some organizations’ staff don’t know a policy exists. So if you’re in a leadership role and your organization has guidelines or an AI policy, make sure staff actually know what the policy is and how to implement it. Some policies are written at a very high level, so they need to be demystified as a next step. A lot of folks also want to form an AI work group, and we heard about possibly repurposing an existing data modernization work group, or adding this AI work to it, which can be essential, as well as building internal skills.

So, in terms of the main themes, we’ve been receiving a lot of questions, and we’ve put them into buckets by type. Some are about AI basics and capacity building. We usually get questions like: what is a one-stop shop for AI beginners? Can we just get in and find this information easily? (If you’re from a national organization, there might be an opportunity there.) What training should we take? There are so many trainings and courses being offered; which ones should we choose, and how would they help us? Should we focus on capacity building and policy development, or on practical applications? There is a lot of appetite to know what the use cases are, and we already heard some great examples of using AI for thematic analysis.

What are the other use cases? As we know, the Chicago Department of Public Health used AI for food inspections; it was able to use historical data to identify which particular areas food inspections should focus on. So some of those case studies exist; we just don’t have a database that shows all those use cases, and there’s a lot of appetite for that. On privacy and legal and regulatory compliance, we get a lot of questions: should we only use tools that are HIPAA compliant? What about AI outputs, should they be ADA accessible? Do we need to document our AI use for auditing purposes? These are some of the questions our peers are trying to answer.

On trust and public perception, there seems to be increasing concern about vendors using AI. Should vendors disclose it? Should vendors be allowed to use AI at all, and should that be addressed in the policy? And if our department is using AI, should we disclose that on every publication? On our website? What’s the proper citation for AI? These are some of the questions individuals are wrestling with around trust and public perception. And environmental impact has been coming up more and more in recent conversations.

I think public health is really struggling with the fact that AI uses a lot of resources, including water to cool the systems, and with the fact that data centers get placed in communities that are already vulnerable, and what the long-term impact of that will be. By the way, Erica Gaten at Health Resources in Action just had a wonderful piece on LinkedIn about some of those tensions and the implications of AI for environmental health. So I know we’re struggling with that, but the question is, what is the role of public health here, and what can we really do in terms of regulation?

So that hopefully gives you a bit of an overview of what we have been hearing. Now, diving into the fun part: the AI governance framework. The framework is much larger than just an AI policy, but, and this is part of the toolkit, it’s really important for policy development to lay the groundwork, so don’t skip that process. The steps are not really linear; some can happen at the same time. Establishing an AI work group, which again can be part of an existing data modernization work group if you have one, and then determining the rationale for using AI, is very important.

And actually, the toolkit I mentioned includes a whole exercise you can use with your colleagues to work through this, because we don’t want to use AI for the sake of using AI. We want to really understand what problems we’re trying to solve, whether AI is the best solution, and what type of AI we’re talking about. So it’s really important to determine that rationale. AI literacy level: it’s really important to understand where your organization is. There are some good survey tools out there that can help identify that, as well as figuring out who your experts are and how you can lean on them. And even if you don’t have a policy yet, if your organization allows it, it’s really worthwhile to explore low-risk use cases.

It’s up to your organization to decide what counts as a low-risk case, but usually those are the ones that don’t have immediate impacts on clients or carry legal implications. Exploring them really helps you understand what AI tools exist and how you can use them; then start tracking use and recording lessons learned. I really want to flag a great resource from the San Jose AI task force at the city level: they have developed wonderful resources and toolkits, and they track AI use at the city level. Lessons learned can really help you understand how to pivot. Then, once you’ve gone through those steps, expand the work group and start policy development. As with all policies, you need a purpose statement, because you need to understand what the policy is trying to govern. Are you governing generative AI? Are you governing all kinds of AI?

Then the scope: is it going to apply to all vendors? Is it going to apply to all employees? Really figure that part out. Guiding principles are also important; they are the values you want this policy to uphold. For example, if you’re concerned about workforce sustainability, because we know AI can replace parts of the workforce, you want to make sure it augments and supports the workforce instead. So your AI principle can be that the way you embed AI isn’t going to replace your workforce; it’s going to support it. That’s an important step in AI policy development. And again, with policy scope, make sure you’re clear about who the policy applies to, who it doesn’t, and what it governs.

Policy provisions. The good news is that a lot of buckets of policy provisions have already been developed, such as data privacy, bias mitigation, transparency, and human oversight, that can serve as your blueprint, and Shelby will show a couple of examples. Then establish a mechanism for policy implementation. We saw hesitation about writing a policy that just becomes another shelf document, so vague that it can’t really be implemented. It needs trainings and tangible examples so it can actually be implemented on the ground, plus an established process for monitoring, evaluation, and updates.

Before diving into AI, I think it would be great to establish a baseline for what you’re trying to improve, and then actually track whether implementing AI had any impact on that improvement. Are you really reaching your goal? We heard some stats today, for example that thematic analysis saved time, and that can be quantified; accuracy can be measured as well, and that can inform decisions about whether to use AI. Another thing that comes up in discussions is loss of competencies. Should anybody be allowed to do thematic analysis, or should it be folks who already know how the methodology works? We want to make sure the public health workforce does not lose those core competencies, and a baseline can help with some of that as well.

So, a quick roadmap of the toolkit. It has an introductory section that dovetails with what I was describing. It has a topic-specific section for each area, with specific provisions you can look at and adapt for your organization. And each section includes a hypothetical vignette illustrating a scenario: an organization tries to use AI for a given purpose, how they pivot, and what the results are. That’s the roadmap of the toolkit. To wrap up before I transition to Shelby: if you develop an organizational policy, there are a lot of considerations, and these are just some of them, gathered partly from talking to folks in the field.

The one I want to highlight is tool-agnostic language. We have seen a lot of policies where specific tools are called out. It’s not that calling tools out is never appropriate, but the field seems to be moving away from it, because tools are built on different underlying platforms. In some cases you might ban a particular tool without knowing that the tool has changed, or that it actually uses the same platform as another tool you allow. So unless you have a very specific reason to call out a tool in the policy, make sure your policy is tool-agnostic. Also, imagine adding new tools for your organization’s use: you’d have to update the policy each time. Consider instead having an appendix that references the tools, rather than calling tools out in the policy itself.

Another one is the feasibility of provisions. One particular provision we have seen is explainability, and it’s a great and noble cause to make sure we use AI systems that are explainable. Unfortunately, the more complex large language model (LLM) systems are not explainable; even those who develop them do not fully know how the algorithms behind them work.

So if you put in your policy that you will only use explainable systems, you’re basically not going to be able to use a lot of tools. Really figure out what matters: if you’re using tools to determine eligibility, yes, you want to know how the decision was made by AI. But in other cases, explainability might not be a practical provision; it’s just something to think through as you work on the policy. And now we’re going to move to the provisions to consider, and Shelby is going to show a couple of topics and how to think about including those provisions in your policy.

Shelby Rowell:
So thank you so much, Tatiana. As Tatiana mentioned, my name is Shelby Rowell. I’m an analyst at KHI, and I’m thrilled to be with you here today. As we’ve been speaking with health departments all across the country, working with them to help develop policies and doing our different trainings, we’ve separated out some of the things a lot of health departments want to know. First, they want to know how to develop, draft, and refine AI policies that align with their core mission and values. They also want to know how to use AI in a way that builds on their values of public trust and aligns with their ethics as a public health organization, both within AI policy development and within AI use itself.

And finally, a lot of the folks we work with are staff across the nation, just like you, so they may not be in a leadership position within their organization, such as a secretary or an assistant secretary. They really want to know how to bring health department leadership along with them on the journey, so they can develop and roll out an AI policy smoothly, effectively, and efficiently. As Tatiana mentioned, within our AI policy template and guidance document we look at six main topics; today we’re going to focus on human oversight and bias mitigation.

If you’d like to learn more about the others, you can check out our guidance document, which also covers some additional sections. Those are still very important to consider as you develop your AI policies, but in our literature review and policy analysis, those topic areas just didn’t have as much evidence and robust information as the six topics in our main sections. So, looking at human oversight: when we’re thinking about an ethics-based governance model for AI, one thing we can do to support it is develop an AI policy that keeps humans in the loop and ensures that, at key decision points, humans are not stepping away from the work. These are some of the provisions that can help support that mission.

First, oversight processes and protocols: this is a risk-based assessment of the task. Think about what you’re using AI for. If I’m writing an email and want a clear, understandable tone, and I ask AI to help me clean up that draft, that may not need a formal review process; I may just need to read it and make sure everything looks good before I send it. But, as we heard earlier from Nicole, for higher-risk use cases, like doing qualitative analysis and putting all of that information into a system, you may want a more defined, stronger human oversight provision. The idea is that oversight scales with the risk, rather than imposing one huge review process on everything.

So, as we use AI, we make sure the information is accurate and de-identified. We also look at human operator responsibilities and interventions. We saw this a lot in AI policies: if your agency provides services or benefits, make sure that when you use AI, decisions are not completely automated, and that you’re checking the AI-generated content. Then, within bias mitigation, you can divide the types of bias into two buckets: bias inherent in the system itself, introduced on the developer side, and bias on the user end, meaning the biases we bring when writing our AI prompts.

So, as we go, we check our own biases and try to understand what biases may be within the system, and we want to consider and incorporate that into our AI policy. Some sample provisions can help us do this and drive forward an ethics-based AI governance framework: when creating prompts (the fancy term is prompt engineering), review and finalize prompts before use, which can also help with environmental impact; create guidance for users to reflect on any personal assumptions they may have; ensure vocabulary and tone are consistent with our values and everything we put out to the public; and be transparent about our AI use. That transparency can look different depending on whether it appears on our website or on specific projects or products.
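Shelby’s point about reviewing and finalizing prompts before use could, for instance, be backed by a lightweight automated screen. This is a minimal sketch, not anything from the presenters’ toolkit: the regex patterns and function names are illustrative, and a real policy would rely on a vetted de-identification tool, since simple patterns like these miss most PII and PHI.

```python
import re

# Crude, illustrative patterns; a real de-identification step would use a
# vetted tool, but this shows the kind of pre-submission check a human
# oversight provision might require before a prompt goes to an AI system.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected in a prompt.

    An empty list means the prompt passes this (very rough) screen and can
    move on to the human review step; a non-empty list flags it for editing.
    """
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
```

A reviewer would still read the prompt afterward; the screen just catches the obvious cases before anything leaves the department.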

So, some key takeaways for AI use and policy in public health, things we’ve really seen throughout this process: public health departments need clear, practical use cases to support real-world implementation; they want real-world examples of state and local AI policies, which are evolving and becoming more prevalent every day; and they need clear strategies for mitigating ethical concerns while exploring AI opportunities.

So to close, this is just a list of some of the different technical assistance offerings that we’ve been doing with public health departments around the country. If you’re interested, we encourage you to submit that TA request or email Tatiana, me, or Erin to get that conversation started. So thank you so much for having us today.

Rachel Brink:
All right, we are on to questions from anyone in the room or online. I think there’s, yeah, a mic runner.

Audience Member #1:
I just have a quick, easy one. I looked at the app, and the presentations are not available yet. Will they be made available in the app?

Shelby Rowell:
Yes, we can upload it.

Audience Member #2:
When you were running the 60-plus hours, did you do all the 60 hours at once? Or did you divide them up by the 18 sessions? And if you did, did you notice any difference breaking it up into the 18 sessions? Or did you just run them all together?

Nicole Yerkes:
Great question. What we did first was individual session summaries: we started with each individual session and went through those, then uploaded them all at the same time and had it pull elements from each of the sessions into one project summary. So we kind of primed NotebookLM by having it do the individual sessions first, so it could get a sense of the content, the questions we were asking in each group, and what we were trying to accomplish. Then we put in all 18 sessions to pull out themes and other high-level concepts, do the SWOT analysis, and things like that.
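Nicole’s two-pass approach, priming with per-session summaries before a combined synthesis, can be sketched as a small pipeline. This is a minimal illustration, not the team’s actual workflow: the `summarize` function here is a naive keyword extractor standing in for a call to an LLM tool such as NotebookLM or Gemini.

```python
import re
from collections import Counter

def summarize(text: str, top_n: int = 5) -> list[str]:
    # Stand-in for an LLM summarization call; here we just extract the
    # most frequent content words so the sketch runs without any API.
    words = re.findall(r"[a-z']+", text.lower())
    stop = {"the", "a", "an", "and", "or", "of", "to", "in", "on", "we", "is", "it", "that"}
    counts = Counter(w for w in words if w not in stop and len(w) > 2)
    return [w for w, _ in counts.most_common(top_n)]

def two_pass_synthesis(session_transcripts: dict[str, str]):
    # Pass 1: summarize each focus-group session on its own,
    # "priming" the analysis with per-session context.
    session_summaries = {
        name: summarize(text) for name, text in session_transcripts.items()
    }
    # Pass 2: combine all sessions and pull out cross-cutting themes
    # (in the real workflow, this is where the SWOT-style synthesis ran).
    combined = " ".join(session_transcripts.values())
    overall_themes = summarize(combined)
    return session_summaries, overall_themes
```

With a real model, each `summarize` call would be a prompt; the structure of per-session passes followed by a combined pass stays the same.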

Matthew Buck (Audience Member):
Thank you. I’m Matthew Buck from the state of Michigan. Sorry, falling over here. I really appreciate the concrete stories that sort of demonstrate the value proposition, the number of hours saved. I think that really helps me go back and advocate for why this is important, beyond just let’s not get left behind. And I also really appreciate the strong framework that’s structured and gives me a rational basis that I can sort of take and run with.

What I still struggle with, being at a state institution, as opposed to some of my local health counterparts who are able to generate these policies and implement them within their own bodies, is that I have a centralized IT department that has already established its own AI policies, which basically amount to “don’t do it, and if you need to, talk to us.” So there’s this amorphous AI team that makes the decision in a black box, and I have limited ability to steer that. I also lack a shared risk model: they aren’t as invested in me moving this forward as I am. So I can only do so much with frameworks on my own. I’m wondering, and I know you have your own experiences in large government institutions, how did you navigate some of that? Do you have any recommendations for the things that are really beyond our direct or indirect control?

Nicole Yerkes:
Okay, so in Utah, I think something that’s been really helpful is that the person who is kind of part of that nebulous AI policy group that you were referencing is one of our DMI champions, and he’s part of our DMI core team. He’s on our DMI Council at the executive tier level and at the leadership tier level, and so I think being able to have his ear as we’re working through these things and explaining the importance of it has been really helpful. In Utah, we are still very risk-averse. We essentially have the same policy, which is, you can use it as long as it’s approved, but nothing is approved, so therefore, you can’t use it. But I think that having him and being able to have those conversations and have those touch points on a regular basis is really helpful in getting that buy-in and having an advocate to push these things forward.

Susan Robinson:
I will say that in Arizona, I am in a different spot. Ours is open for use: here are the things that are approved, and here’s the process by which you can use them. So I’ll own up front that I’m in a different space. But one area that has worked for me, not just with AI but with out-of-the-box things that are a little to the left or right of everyone, is my university partners. They tend to have a little less risk aversion at their academic institutions, and a bit more ability to try and fail; that model works a little more joyously in academia than it does in government. People don’t really have a lot of tolerance for “love that you tried a cool thing and it didn’t work,” right? No one’s told me that in my job recently.

So, I think academic partners are interesting. You bring the data and the public health ideas, and they bring, a lot of the time, the capacity and potentially even some funding opportunities. In terms of being able to try and fail, and it’s not a cheap endeavor with AI, there is a possibility to work with institutions like your academic partners and say, hey, I just want to pilot this; could you hop on board with me, and can we figure out how to make this happen? And a lot of times, state agencies tend to like that look, if you will. So that’s also potentially helpful.

Audience Member #4:
Hi. This is Preeti from Boston. This is a question for Utah: when you implemented Google Gemini in your infrastructure, how did you handle the security part?

Nicole Yerkes:
That’s a great question. To get really into the weeds, I’ll have to direct you to my aforementioned DMI champion, who has been working on this policy. An important note is that we were part of the pilot for these use cases, and since then, they have rolled out Google Gemini across our entire agency. In the Division of Population Health, we were the last to receive access to Gemini, because we have the highest amount of protected information in our division. One thing they saw during the pilot was that it was starting to pull information from Google Drive when we didn’t give it permission to. I’m very thankful for the group that was working on this; they were able to identify that security concern and put in parameters to make sure it’s not doing that, that we have control over it, and that no PII or PHI can get into our AI system without explicit knowledge on our part.

I think our DMI champion, who was overseeing the policy group, as well as DTS, did a really good job of putting those parameters in place. But I will say that the access we have to AI right now is extremely limited. It’s only within our Gmail, so it can help with email writing, summarizing emails, and summarizing chats. The thematic analysis we did in the pilot, we don’t have access to that just yet, but I think it was a really good use case to demonstrate where we could move in the future, while also ensuring privacy and security around the way we use it.

Audience Member #5:
We have a question from the online audience: how have you managed the ethics and optics around the environmental impact of AI use, especially in places like Arizona, where concerns with water and the amount of water use are immense? We’ve had these concerns of resistance when advocating for use.

Susan Robinson:
This is where I regret that we talked about the heat in Arizona earlier. This is going to be a bit of a weird thought process, walking you through what goes through my brain, but “yes, and” is really what it comes down to for me: there is absolutely an impact. Is the impact of me not doing my job better, faster, and stronger more impactful than the environmental one? I’m not sure; I don’t know the answer to that. I do know that I think public health absolutely has to use all of the tools potentially made available, and in the somewhat turbulent times we are going to find ourselves in, there are fewer resources, both human and monetary, so hard choices have to be made, period.

I wish I could say that I’m the super expert on Arizona and water, and to be honest, I am not, so I will not comment on that, because I will look like a fool. But is it a concern? Yes. I think you can turn any corner in the AI conversation and smash into a risk. So it really comes down to a risk balance; it all has to be part of the conversation, and deciding which risk wins out at the end. I don’t know that I know the answer to that, or where exactly that decision should take place. Gosh, I wish there were an adult in the room for this, right? I don’t know who that is in the global world of public health and AI, and whether we use too much water or not.

But to me, it’s a risk question of which one’s riskier, and looking through all of the things, is it riskier for me to not give, you know, my team a tool to make a difference in someone’s, you know, health, or is it riskier for the water? Gosh, I don’t know. I don’t know, but we could just sit around with like, a beer later and just, like, chat about it too. So like, if y’all are down for it, I’ll just keep, like, thinking the deep thoughts, but, yeah, I don’t know. Do you have thoughts?

Tatiana Lin:
Yeah, I also have thoughts on that. Well, one thing we can add is what we have seen in state-level policies or direction: for public health to see that there are opportunities to inform and engage with the private sector when they are trying to decide where to put data centers, and at least try to get into that conversation and do impact assessments or Health Impact Assessments. Of course, that will require building those collaborations and their willingness to consider it, similar to environmental impact assessments, and providing this data to make sure you're mitigating those risks. Another one, on the individual level: there are some things we have seen in policies about how to use those generative AI systems. If you do a lot of prompts or just have too much fun with them, you're using more resources.

So, like, encouraging staff to use those systems meaningfully and thoughtfully, and limiting the number of prompts they are doing. Make sure you use one comprehensive prompt rather than ten prompts, since every prompt generates additional use of the system. So there is some personal accountability. But for optics and discussion, you can make sure you're acknowledging, as a public health department, that it is a resource-intensive tool, and that, unfortunately, there is not one solution right now, but you're working to identify those solutions, both at the organizational level, to walk the talk, and at the system level. So there might be some things you can consider as you're wrestling with this.

Audience Member #6:
So thank you for this session. I never knew that there are policies from the CDC for this. I come from IBM Consulting, and we have done a lot of agentic AI assets, like document intelligence and live insights to make you more productive, like when you are on a call in a call center and you want insights coming up from your data. And I wanted to point out that the more data is coming in, the bigger the concerns become: data quality, then the policy that governs it, and what we want to have for data security. So that's the big challenge.

But then I was just going to tell you that we have a lot of agentic AI assets which can change things. As you have seen, this AI revolution is happening everywhere, and this industry cannot be without AI. So I think we have to take the bold step. We should have data security, and we should be thinking about all the repeatable agentic AI assets we can use to improve our productivity, health care, and the community. But we should not be left behind with AI. Healthcare is slow to adapt compared to what I see in the commercial world; it is going slowly, but I think we should make these bold steps into the AI revolution and modernization. Speaking as a Data Architect, thank you.

Rachel Brink:
More questions. Oh, any other questions?

Audience Member #7:
Hi, Tatiana. I was struck by your comment about how we are using small AI, and I appreciate that. Also, my frustration: you know, in 25 years of public health, we are always behind the ball. Healthcare will say they're behind the ball on AI, but they are so far advanced over where public health is in terms of using it in a meaningful way. I mean, we're only scratching the surface of how we piggyback on what private industry has done with AI to advance ourselves more quickly. Are we just stuck in this governmental thing that we can't get past? Do we just have to work through it?

Tatiana Lin:
That's a very important question that I'm wrestling with, and I definitely welcome insights from others in the room who have thought about it. My thought is that we need to build that technical capacity within our governmental sector. We somehow need to attract those folks who can develop those large language models and who think very innovatively.

That's one level: I feel like the technical capacity is just not fully there. And then the other piece is that we have all of the red tape. I think somebody mentioned this: the IT department making policies is such a standard thing that we have seen, and those policies typically just say, don't use it. That red tape is very cumbersome. So it's a lack of technical capacity plus red tape. And also, we don't have as many public-private partnerships with these big giants. I think we need some way to build those partnerships so they can help public health move into that.

I wonder if in the healthcare sector it's more developed, in a way, because we see them coming in, you know, for cancer screenings and other purposes, putting forward those technologies. They're getting interested in public health, but public health doesn't have the same funding, and so I think it really comes down to the big funding they're trying to generate. This is such a multi-pronged approach, but I do think we need to resolve it at the national level, from a public health system perspective, to be able to benefit at each state level. That's probably what I can offer at this point, but I think it's something we need to wrestle with, because otherwise we're going to be stuck in small AI applications for a while.

Rachel Brink:
All right, well, let's give a last round of applause to our presenters. Thank you, guys. And I have some last-minute announcements. It is the end of the session and, in theory, the end of the day. Please join us in the fourth-floor lobby here for a welcome reception and poster session, generously sponsored by Inductive Health. Throughout the fourth floor you'll find many poster presentations; please take a moment to visit with the poster presenters about their work. And then tomorrow we have a lovely start time of 8 am in the Grand Ballroom on the fourth floor for breakfast. So thank you, guys, for this engaging presentation session. Great job. I have photos; I took them of everyone, because I felt self-conscious only taking them of you.
