AI in Clinical Practice
- SLP Nerdcast
- May 26
- 48 min read

This transcript is made available as a course accommodation and is supplementary to this episode / course. This transcript is not intended to be used in place of the podcast episode, with the exception of course accommodation. Please note: this transcript was created by robots. We do our best to proofread, but there is always a chance we miss something. Find a typo? Email us anytime.
[00:00:00]
Intro
Kate Grandbois: Welcome to SLP Nerdcast, your favorite professional resource for evidence-based practice in speech-language pathology. I'm Kate Grandbois, and I'm Amy
Amy Wonkka: Wonkka. We are both speech-language pathologists working in the field and co-founders of SLP Nerdcast. Each
Kate Grandbois: episode of this podcast is a course offered for ASHA CEUs.
Our podcast audio courses are here to help you level up your knowledge and earn those professional development hours that you need. This course, plus the corresponding short post-test, is equal to one certificate of attendance. To earn CEUs today and take the post-test after this session, follow the link provided in the show notes or head to slpnerdcast.com.
Amy Wonkka: Before we get started, one quick disclaimer: our courses are not meant to replace clinical advice. We do not endorse products, procedures, or other services mentioned by our guests, unless otherwise
Kate Grandbois: specified. We hope you enjoy
Announcer: the course. Are you an SLP or related [00:01:00] professional? The SLP Nerdcast unlimited subscription gives members access to over 100 courses offered for ASHA CEUs and certificates of attendance.
With SLP Nerdcast membership, you can earn unlimited CEUs all year, at any time. SLP Nerdcast courses are unique, evidence-based, with a focus on information that is useful. When you join SLP Nerdcast as a member, you'll have access to the best online platform for continuing education in speech and language pathology.
Join as a member today and save 10% using code NERDCASTER10. A link for membership is in the show notes.
Episode Sponsor 1
Kate Grandbois: Hello everyone. Welcome to SLP Nerdcast. We are very excited for today's episode, because we're gonna be really unpacking a topic that is central to our modern lives. [00:02:00] AI is here, it's not going anywhere. And this has a lot of intersections with our clinical work, all of the ethical things that go along with it.
And we have the pleasure of welcoming not just an expert in artificial intelligence, but she is also part of our field. Welcome, Dr. Ashley Dockens.
Ashley Dockens: Thank you. I'm happy to be here.
Kate Grandbois: We're joined today also by our resident doctor of speech-language pathology, Dr. Ana Paula Mumy. Welcome, Ana Paula. Thank you. Um, would you all like to tell us a little bit about yourselves before we get started?
Ashley Dockens: And, um, presenting about it all over the United States, so I'm happy to, to join you guys. Oh, and I'm also, um, I'm chair of the AI task force for, uh, the Council of Academic Programs in Communication Sciences and Disorders, which is part of the group that's trying to get some AI information out there really for the field, especially from the academic side of things.
So that's me,
Ana Paula Mumy: and I am Ana Mumy. I'm program director and associate professor at a small private university in East Texas, and I have been an SLP for almost 25 years.
Kate Grandbois: I'm very excited to have this conversation. Before we hit the record button, I told a little story about, uh, someone I know whose child got caught using artificial intelligence, um, for a school project.
And it was obviously very upsetting to the parents. And there's, you know, no [00:04:00] question that this tool that we now all have access to has some serious ethical considerations. And, um, I'm very excited to unpack how that intersects with our clinical work. Before we get into it, I do need to read our learning objectives, so I will go ahead and do that and then we will, we will get started. Learning objective number one: identify current AI applications that might be used in speech-language pathology practice. Learning objective number two: describe the ethical considerations in AI implementation, including privacy and consent. And learning objective number three: describe tasks suitable for AI assistance versus those requiring human expertise. Disclosures:
Ashley's financial disclosures: Ashley received an honorarium for participating in this course. Ashley's non-financial disclosures: Ashley is head of the AI Task Force for the Council for Academic Programs in Communication Sciences and Disorders. Kate, that's me. My financial disclosures: I am the [00:05:00] owner and founder of Grandbois Therapy and Consulting, LLC, and co-founder of SLP Nerdcast. My non-financial disclosures:
I am a member of ASHA SIG 12 and serve on the AAC Advisory Group for Massachusetts Advocates for Children. I'm also a member of the Berkshire Association for Behavior Analysis and Therapy.
Ana Paula Mumy: And my financial disclosures are: I received, uh, compensation from SLP Nerdcast for my work as ASHA CE administrator and as the SLPD on demand. And I'm employed by East Texas Baptist University, and I have no non-financial disclosures.
Kate Grandbois: I think I'm thinking about where we even start this conversation because Ashley, as you mentioned before we got started, you could talk about this for days and, and I believe, I believe you, it's a really big topic.
I'm thinking about how relatively new this is in the last, you know, I've been practicing since the early two thousands. Um, I'm just, you know, thinking about how quickly our world has changed or how much our world has changed in 20 years [00:06:00] and the, the invention of AI really in the last, you said two, two and a half years.
I mean, that is a significant change. I. I wonder if you can start our conversation by talking a little bit about, not necessarily where it came from, but what tools you see integrating into clinical practice or how you started seeing this kind of transition over into our field.
Ashley Dockens: Yeah. I mean, what I'll say first is really artificial intelligence as a whole has been around a long time.
Uh, that's been around since the 1960s. Um, generative AI, on the other hand, is, is really that new piece, and that's the idea of an AI that can create, and create new content based on things it's been trained on. Now, it's not just making, it's not sentient, it's not making these, uh, thoughts for itself and creating, uh, the way a human would create.
Um, but it's taking, uh, information that it's learned from thousands and thousands and well [00:07:00] really trillions of bits of information that it's received from public information that's out there, that's audio, visual, written texts. I mean the works of Picasso, the works of great philosophers, including people like myself as well, who aren't great philosophers, but like might have public information out there on the web.
Anything that's public has really been pulled into these, and they found that when they pulled more and more information in, not only could they get AI to be predictive, you know, like when you're in an email and you say blah, blah, blah, and it says to hit tab to finish this, you know, "How are you, I was really needing," and then it suggests something. That was initially what they were trying to do, was really to improve those predictive-type pieces.
But when they changed the transformers and all of the things that assess it, all of a sudden they found it could not only help predict, but it could also answer, uh, it could also reply, it could also create. Now there's a lot of [00:08:00] arguments about whether that's fine, that they've taken all this public information that people didn't really give permission to be used in this manner, to train it. And they'll say, you know, people will say, well, I'm not gonna use it because ethically I feel uncomfortable with the fact that they've pulled, pulled this without consent. And that is a whole discussion. We could go for years down that road. The problem is, is we can't undo it.
And so that toothpaste is out of the tube, is kind of the phrase I always like to use. It's here, it's present. And now that it is something that could be used to, for example, speed up workflow. We're finding that more and more people who are potential employers that we might work for out there in the field are wanting us to be literate in how we use it so that we can improve our efficiency and our output and the things that we do.
I mean, you, you guys know, you've, you've had caseloads that were too big and you have no time for the things that you really need to do. But what if there's a tool that can help you speed that up? So certainly in [00:09:00] speech pathology, we're seeing, you know, better enhanced AI speech recognition for transcribing things and, and such, you know, uh, that exist out there.
Different programming that would allow us to do a lot of those things that we used to do more heavily by hand. And we had some of those things out there already, but they're, these are really improved. Um, but really where I see the biggest. I guess the biggest potential impact is for, um, clinicians to go and use, especially the chatbot based information to really help create, um, materials, give ideas, um, to help with even reporting.
Now there's a lot of ethical privacy things we have to be very careful of, but really the primary tools I see being things like chatbots, like ChatGPT, Claude AI, uh, Gemini, Perplexity, um, Microsoft Copilot, those kind of things, I actually see being, uh, the bigger assistance in the [00:10:00] workflow, uh, workflow of how our clinicians are working, uh, out there in the field.
Um, but there are also, I don't know, yeah, I think we're all old enough to remember the dot-com boom. Sometimes I'm talking to people much younger than me. Uh, we're seeing a lot of that same thing. You remember how, like, everybody suddenly had a website and everybody suddenly had, you know, and then some of that sort of exploded, and we found out some of it wasn't really quality, and some of those people lost a lot of money from trying to grow too quick.
I guess there's some of that we're seeing with AI too. Um, so I would say just, you know, with every AI you use, always look and see what their privacy and security is. Uh, look and see where they store this information, where is all of the things you're putting into it going, because that's, you know, a lot of people worry about the output and how we're using it, but we also have to worry about what are we putting into it.
And that's the part, you know, that I try to stress probably more than anything else. Um. [00:11:00] You had a second part of that question and I've lost it.
Kate Grandbois: I, no, I, I'm, now I'm fascinated and I wanna ask you five other kind of off topic. Go for it. Questions. Go for it. Um, I, I think, you know, what's really interesting is, you know, there are some relatively obvious, and we haven't really started our conversation yet about the, the, um, you know, the consent and assent pieces of this, but I, I think that the, it's, there are some obvious, like HIPAA compliant things.
When you think about clinical work, that's kind of probably at the forefront of all clinicians' minds, hopefully, at least because we're trained in HIPAA, et cetera, et cetera. Uh, I think it's a really interesting, um, consideration to think about the other non-HIPAA-compliant storage related to using your AI.
Um, I was going, what I was gonna ask as part of that original question was just how we see this leaking into the infrastructures that we have for clinical practice. As an [00:12:00] example, I own and run a private practice. We use an EMR software system for our daily notes and our billing. And I am getting hounded by these people about the upgrade to use their AI enhancements or the integration of AI.
And I think that, you know, myself and my staff included, we're like, well, what, what is that? Like, what, what does that mean? You know? And I see Instagram ads for. Oh, I can't remember the name of it, but I'm sure you know where it will listen to your whole conversation and then write your, it will listen to your whole session and then write your note for you.
Right. So I guess I'm just wondering like how you see some of this technology leaking into our actual industry, I guess would be a better word than profession. Yeah.
Ashley Dockens: I mean, it's, honestly, it's leaking into everything. And, and, and one of the things I will say that has been most concerning for me is seeing people suggest different ways we should use it, or suggest integrations, where it's obvious that there's not been full thought and consideration on where's the data going [00:13:00] and what, how, how are we keeping it private. And there are a lot of companies that do very much advertise how they keep your privacy and security, but often that requires a paywall, right? Like whatever service you're using.
You may can use free ChatGPT, you can use free Microsoft Copilot, you can use free all these different things, you can use free. But what are, what are they doing, and how are they storing it? And same goes for companies like the ones who do your EMRs. Like, are they vetting, and are they providing you with what that looks like?
And, and I'm actually finding often it's not. And I think that's a question we really need to be asking. I mean, I, I went to a conference, um, where they were suggesting how it could be used in, in higher education. And, and these people were wonderful that did this presentation. I really enjoyed it. I thought it was all fantastic.
But when it came towards the end of it, I realized they've not talked about how are they keeping, in this case it was student information, how they were keeping it private. So I asked the question and they were like, oh, we really hadn't considered that. [00:14:00] So I think sometimes we get ourselves so excited about what it can do, because it is amazing.
I mean, it's amazing what you can do with AI, that sometimes we forget those very important but sort of background pieces that we take for granted in everything else that we do. I would like to hope that all of the different AIs that are being integrated into things like EMRs are being vetted, and that if you asked, they could give you that information.
But I would say for people who are out there in practice, that's a good question to ask when they keep coming at you with "do this upgrade" or "pay for this additional whatever, and then you'll be able to use it." One, can they show you what it's gonna actually provide you? And do you really need it? You may not. Um, a lot of people add AI to add AI 'cause it's the new shiny thing, right?
Um, but two, is it, is it private? Is it secure? Where is this being stored? What would happen if y'all had a leak? What would, you know? Um, I constantly tell faculty when I'm training them, [00:15:00] always assume that whatever you put in could be found out. Um, you know, 'cause one single leak of ChatGPT and it has your name and your email on there.
Are you comfortable with what people have seen that you've searched? I would want to feel comfortable with any of those things, whether it was integrated or not. And, and I do think that, that we're gonna see it more and more integrated, and it probably should be. I just hope that the people who are doing it are being careful at the way they're, they're doing so.
Um, because yeah, that's, and here's the deal, here's what I'll say: there's already a precedent out there that AI can't be held responsible for its mistakes, because it's not sentient, right? It doesn't think about whether it plagiarizes, it doesn't think about whether the answer was right or wrong. It doesn't think about whether there's a privacy concern.
It's just giving you the answer based on the prompt you've given it. So if AI can't be held, held accountable, well then you'll be held accountable. So if you've put something in there that could have been private, and it could [00:16:00] violate, um, HIPAA or, in academia, FERPA, uh, you're gonna be responsible for that, even if the AI did it.
You know? And so I think, um, I think we overlook some of those really important pieces as we play with this. 'Cause we're like, look at this cool thing, I can write, you know, an entire poem about my mother in five seconds, which is great, but like, does your mom really want all that information about that?
She loves onions, she has bad breath in the morning, whatever you're doing and, and did you
Kate Grandbois: ask it to write a nice poem or a not so not so nice poem? Now I'm like thinking back on what I put in there. I'm like, oh man, I don't even know. I don't even know what I've asked this thing. Oh my gosh. That's wild.
That's really wild. Okay, so we're seeing it kind of leak into our, our EMRs, you know, we have it at our fingertips. A lot of us probably have this thing on our phones. As I was mentioning, you know, this conversation I had over the weekend, a friend of mine, the sprinklers went off in their house [00:17:00] at a, they had a fire alarm, um, or, you know, the fire alarm went off, and he asked ChatGPT how to turn the sprinklers off in his house.
I mean, I, I feel like it is everywhere. Um, I get ads for using it to do research, to do lit reviews and all of these kinds of things. I'm wondering how you see this influencing the assessment and diagnostic process or even the clinical process in terms of, you know, if it's here and if it's part of our professional workflows, how is this gonna intersect with our clinical decision making?
Ashley Dockens: Well, certainly it's something I think we have to do with caution, because again, AI was not made to tell us the truth. It was made to give us the most likely result, meaning whatever data it's pulling from, whatever is the most likely thing from that data, that's what it's gonna say. And if it doesn't even know the answer, it will create an answer based on similar areas of something.
So if it [00:18:00] can't find something in speech pathology, for example, it might say, okay, audiology's a related field, I'll look at the information from that and I'll draft my answer. But it won't tell you that; it'll just draft an answer as if it's true about speech pathology, which is why it's so important for us to really vet the information we're getting out of it.
But, but that's, and, and by the way, when it creates fake information, that's called an AI hallucination. It's not seeing fake things, but it's creating fake things. Uh, and so AI hallucinations are a concern because they sound so convincing. They sound so right. So when you're bringing a, uh, a, a patient or, or a client into that mix, now you, you have to be concerned with: if I'm gonna ask things that may then impact what I do with this client, I need to really still be able to vet, would I agree with it in my clinical expertise, right? When we talk about the evidence-based triangle, we always talk about one of those legs actually being us.
And, and I think [00:19:00] oftentimes you'll find certain people who will leave fully out the research, or they'll leave fully out us. And both those pieces are still very important in that triangle, patient values, right, being the other. Um, anything that we do, we're gonna have to do with caution. However, can you absolutely, today, right now, go into something like Claude or ChatGPT and give some generic information about a patient?
You know, I have a patient who's in their mid-fifties and struggles with this, this, this, this, and has this very unique problem of X, and blah, blah, blah. Like, what testing may I want to do further? Uh, because they've not yet had an assessment for, maybe you have some other concern, they've not yet had an assessment for autism. What, what testing could I do?
What, what testing could I do? It will give you with rationale even what it would suggest, and y and it might not be things you had thought of that then it triggers you to go, wait a minute. Yeah, I should do that assessment. So, so there's some really great idea generation you can get from [00:20:00] AI to say. Hey.
Yeah, like I could do that and that might help me with this patient. Same with even then the follow up we do with therapy. Like, okay, I have these kinds of results. The person has failed the, you know, you can't say who the person is. You can't say where you are, right? You have to de-identify. But like I have a female patient who is 14 and has these struggles and there's this unique problem they have that I've never dealt with in therapy.
In addition to this issue, like what therapeutic techniques might I take to better serve this, this, this patient? And it absolutely will give you very quickly all sorts of recommendations. And the great thing about AI is it doesn't get tired of you. So you can essentially just keep going back, be like, and gimme more, and gimme more and gimme more.
And it will keep giving you more and it might trigger that spark within you to go, oh, that is something I would wanna do. Right? So I think it's still important for us to look at those things and not just do the first thing that pops out and say, yeah, yeah, yeah, yeah, that's great. But to modify it to [00:21:00] what we need.
But when we get stuck, especially with those trouble patients or what have you, absolutely can it immediately, you know, for that matter, if you kind of de-identify enough and give some general things, it can write SOAP notes, like it can do all kinds of things very quickly for you. So as far as workflow, I mean, I imagine, especially those new individuals, a little younger in the field, maybe a little afraid to ask their colleagues 'cause they don't wanna look dumb, you know, they already feel like they're an imposter, they don't feel like they're old enough and good enough starting out. This, you know, it's a good place to go ask a question without judgment.
Um, and so I do see it as that as well. I think we're gonna see more and more integration into the software we use, where we will potentially be able to, in a private, secure space, be able to put in all of our regular, typical patient information and have it generate what we need it to, for either a report or what have you.
That may be what you're finding and [00:22:00] advertised to you in those EMRs. But again, it's a good question: like, what's the safety and privacy? Do I need to add a layer of consent to my intake forms for my patients, for my clients? 'Cause you probably do. Um, if there's no way to undo what you do, like if you decide, yes, I'm moving forward with this AI, with this, and they can't have a choice 'cause it's gonna be used either way, there probably needs to be some sort of acknowledgement in there, that they realize that the information is out there and in this space.
Now, if it's under a, you know, private, secure space, I don't expect it to show up in my search on ChatGPT. But if they don't have privacy and security on it, then I could be concerned that that would happen. And that's a fair question that a patient may ask: like, what happens to my information?
And I think we have to be more active than we probably ever have with anything we've ever used, software-wise and whatever, to say, oh, no worries, here's where it goes. Um, 'cause there's a lot of [00:23:00] people who are very concerned about feeling very watched, monitored by these things. Um, in audiology, we've had data tracking for hearing aids forever, so I know if you've worn them, I know what kind of environments you've worn them in.
And I had patients and they were, it's generic information. It's not very specific. I mean, it tells on you if you've not been wearing your hearing aids, but that's about it, you know, and I had patients who are concerned about that. You know, this is really taking it further, you know? Um, so you may have some people buckle at the idea of using it as well.
And I think that's something we have to think about, um, as clinicians for sure.
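The de-identification step described in this turn, stripping names, dates, and contact details from a note before it ever reaches a chatbot, can be sketched in a few lines. This is a minimal illustration only: the regex patterns and the `scrub` helper are invented for this sketch, and real HIPAA de-identification covers many more identifier categories than a handful of regular expressions can catch.

```python
import re

# Minimal illustration only: HIPAA's Safe Harbor method lists 18 identifier
# categories, so a real workflow should not rely on regex alone.
PATTERNS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),           # social security numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),    # dates like 3/14/2011
    (re.compile(r"\b\(?\d{3}\)?[-. ]\d{3}[-. ]\d{4}\b"), "[PHONE]"),
    (re.compile(r"\b[\w.]+@[\w.]+\.\w+\b"), "[EMAIL]"),
]

def scrub(note, known_names=()):
    """Replace obvious identifiers before the text leaves your machine."""
    for name in known_names:  # names you already know from your own records
        note = re.sub(re.escape(name), "[NAME]", note, flags=re.IGNORECASE)
    for pattern, token in PATTERNS:
        note = pattern.sub(token, note)
    return note

raw = "Jane Doe, DOB 3/14/2011, reached at 555-867-5309, struggles with /r/ in conversation."
print(scrub(raw, known_names=["Jane Doe"]))
# → [NAME], DOB [DATE], reached at [PHONE], struggles with /r/ in conversation.
```

The point of the sketch is the ordering: the clinical content (the /r/ difficulty) survives and can go into a prompt, while the identifiers are removed locally, before anything is sent to a third-party service.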
Kate Grandbois: Yeah, that's completely fair. And as I, as I'm thinking, as I'm listening to you talk, I'm kind of zooming way out in my mind, right? So if we zoom out to 300,000 feet, well that's really high. 30,000 feet. Let's go, let's go to 30,000, not 300,000.
And we think about our [00:24:00] profession in general, and I'm speaking specifically about speech-language pathology, since that's, that's my specific training. We've got a field where there's a staff shortage nationally. Um, there's a, a shortage of the workforce, which has created a pervasive burnout problem, no thanks also to a pandemic layered on top of it, right? So we've got, you know, caseloads that are completely unmanageable, workloads that are completely unmanageable. I think the last statistic I heard was that referrals for communication disorders were up 132% or something absurd from the pandemic. Um, and on top of that, and this is kind of getting into my interest area, we've got what is referred to as the research-practice gap, where the research that we're using in our clinical daily lives could be over 17 years old.
So there's this massive delay: not only are we overworked, and a lot of people would argue underpaid, with, with workloads that are totally unreasonable, a lot of us [00:25:00] are also operating and using old information in our therapy rooms. Right. Okay. So that's the landscape I'm looking at. And, if you look at it through that lens, we got problems on problems on problems, right? Not that we really do, but you know, there are, we, no industry is perfect, no field is perfect. And that is the playing field that we are on now. And here we're talking about this tool that's like, I'm imagining the, the icon I see associated with AI all the time. It's like sparkles, you know, it's like a magic sparkly wand, right?
It's like this, like exactly. It's this little firework of, of like, it's like it's gonna be a magician and come and solve all of these problems. Um, and I'm thinking about all yes. So no. Right. I'm thinking about all of these really creative ways and listening to you talk about all these really creative ways.
It could be used to reduce your paperwork time, uh, give you therapy ideas at your fingertips, do a quick literature search that maybe pulls up and summarizes research for you in a way that is readable and digestible and [00:26:00] newer. You know, I mean, I went to graduate school, like I said, over, you know, 20 years ago.
So the, what I was taught is definitely not relevant. I, I, you know, we all need to have some, some updated pieces of information. And yet there is such an asterisk here, um, with, with all of this privacy and, and data protection and all these things. What aspects of AI use really should be limited to humans?
What is, what would you say from a clinician's perspective, aside from helping it, you know, give you that spark of an idea inspiring you? I think before we hit the record button, you called it your little muse, right? Right. That's an appropriate use, but what components of this really should be left to the human brain?
Ashley Dockens: Well, you, I mean, you have people and I have like 3000 things I wanna say back to what you were just saying. So I'm gonna come back to say them
Kate Grandbois: all, say them all. We'll stay here all day. Well,
Ashley Dockens: well first let me say this one thing. So one of the things I think that [00:27:00] you might wanna introduce yourself to and other people might wanna introduce themselves to here is, you know, you talked about, um, outdated information.
It's, it can be overwhelming even, you know, with the number of CEUs and things we're, um, uh, recommended to have, and have to have, for things like licensure. Actually intaking new information constantly when you have such a heavy workload is hard. Uh, I mean, I, I'm not saying any person listening out there or on this podcast has ever been guilty of only half listening to a CEU because we knew enough about it to get away with a quiz.
But we have, we all have done that, right? Because we picked something that we, because we have too much. We have too much. Right. Um, so there, there are ways, though. There's a, there's a, um, AI called Scite, and it's S-C-I-T-E dot AI. Um, it's really made with researchers in mind, but I have introduced it even to family who have nothing to do with research, nothing to do with professional [00:28:00] kind of world things, because, um, essentially if you use Scite, uh, and I will say I think it's only free for like seven days and then you have to pay for it.
It's probably one of the only ones that I would say I kind of push to people as being so useful. I'd pay for it. Um, 'cause it, it really is, but I'm signing up for their free trial
Kate Grandbois: right now.
Ashley Dockens: Oh yeah, sign up for the free trial. So there, there is something in there called Scite Assistant, and essentially you can go in and ask any question that you want to know from current literature.
Like, what is the best therapeutic technique for X, right? And what it will do is it will not only put that together for you and answer, but it will give you the search strategy it used to find it: so what words did it use, what databases did it go through, which articles did it pull from. And it will usually cite at least 50 to 60 articles that are current and modern and say, here is sort of the general consensus of what would be the correct answer to that question.
And it, it, it, it [00:29:00] almost does like a systematic literature review for you in about five minutes, which is insane. Uh, but if you're with a patient and they have something unusual, maybe you've not worked with it, maybe you studied it 20 years ago in grad school, but because you've not had to deal with it this whole time, that's a good place to go and say, hey, for a patient with X problem or a client with X issue, you know, what, um, what would be the best, you know, types of treatment to use?
It can do that research sort of for you quickly. And then you still have the opportunity to go look at what it pulled from and say, like, do I agree with these sources? And that's a different type of AI response. That's something called retrieval-augmented generation, or RAG. Uh, RAG is much more consistently accurate because it's pulling from very specific documents, uh, rather than just all the information that exists out there. So it's a lot more trustworthy, and it's something that could be used really quickly.
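The retrieval-augmented generation pattern described here can be sketched roughly as: score a small document library against the question, keep the top matches, and build a prompt that confines the model to those sources. Everything in this sketch (the toy corpus, the term-frequency scoring, the prompt wording) is invented for illustration and is not how Scite or any particular product actually works.

```python
import math
from collections import Counter

# Tiny corpus standing in for a library of current articles (invented titles).
CORPUS = {
    "smith_2021": "phonological awareness intervention improves literacy outcomes",
    "lee_2023": "retrieval practice supports aphasia naming therapy outcomes",
    "park_2022": "hearing aid data logging tracks daily wear time environments",
}

def tf_vector(text):
    """Term-frequency vector for a text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Rank corpus documents by similarity to the query; keep the top k."""
    q = tf_vector(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, tf_vector(CORPUS[d])), reverse=True)
    return ranked[:k]

def build_prompt(query):
    """Ground the model's answer in the retrieved sources only."""
    context = "\n".join(f"[{s}] {CORPUS[s]}" for s in retrieve(query))
    return f"Answer using ONLY these sources, and cite them:\n{context}\n\nQuestion: {query}"

print(build_prompt("what therapy improves naming in aphasia"))
```

Because the generation step is anchored to a retrieved, citable document set rather than the model's whole training distribution, the output can be checked against named sources, which is exactly the "do I agree with these sources?" step described above.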
So it's a lot more trustworthy and it's something that could be used really quickly. So I did wanna say that [00:30:00] before I actually came to your other question, which was about like, what do we need, still need to keep the human piece in? And I would say the human piece needs to be in everything, um, for multiple reasons.
One, there's actually research that is suggesting that AI that is trained solely on AI outputs will eventually collapse on itself and not be able to answer the questions we need. So AI needs us to work, it needs our unique, authentic, human style of, of outputs to be able to work appropriately for us. So.
Um, what I will say bothers me, I guess, the most about AI is when you can tell, it's very obvious, someone has just copied and pasted whatever it suggested, or done whatever its initial suggestion would, would be, without vetting it. It's going to give you things that sound really good. Matter of fact, research suggests it's way more persuasive than we are as humans, because it's so [00:31:00] thorough and it's written so well and it uses very, um, easy-to-follow language. And matter of fact, I can even tell it,
I don't understand what you're saying, put it in easier-to-understand language, and it's gonna do it, right? So it's, it's very convincing, but that doesn't mean it's always the best option. So I think the human part: one, when we're working with clients, like, they have to know we care, right? We have to build the rapport, we have to build the trust.
You can't replace that with AI. Matter of fact, AI might turn off some people for that rapport and trust building. So I think understanding just your human interaction, being fun and interesting and inviting and welcoming, is so important. But two, understanding that your expertise still is so important is such a big thing.
Because you need to go in there and say, but is this what I would do? Mm-hmm. Is this a good idea? And, you know, [00:32:00] just like with research: sometimes research will say your best option is X, and you do that with that patient and then nothing improves. And then you say, hey, you know, I saw this once on a similar patient, and this worked.
Let me try it. And it works, it works, it works. And so those pieces aren't gonna go away; that still is gonna be needed. Um, and then the other problem is, there's a lot of, um, almost stereotyping and bias. So it's giving you the most likely result. And I, I'll go to kind of medical research for a minute.
So research of the human body has mostly been done on men; it's not very commonly done on women. So if I go to AI and say, what are the symptoms of a heart attack, and I don't give it more specifics, I don't say, for a woman, chances are the symptoms it's gonna give me are about men. And it's not doing that to be a jerk; it's doing it because that's the most likely result, right?
So if I am there [00:33:00] suddenly feeling like I'm having potentially a heart attack, and I go say, what are the symptoms of a heart attack, and I read it and I'm like, I'm fine? There's bias in that, right? So same with patients and things. It's not necessarily gonna take into account your patient's cultural status.
It's not gonna take into account regional things, things that you know about their, their personal life, all those little details you know from interacting with them. It can't know. And so it's just gonna give you this more generic, hey, try this kind of thing. Now, of course, the more information we give it, you know, what we put into it (we call those prompts), whatever prompt you put in really is very important.
Because if you write a poor prompt, your output's not gonna be great. And the more specific that prompt is, the better it will do at giving you suggestions. However, in an effort to maintain privacy, there's only so much we can give it, right? With a client or with a patient. So I think honestly, it's, it's really more of a starting point.[00:34:00]
Um, it takes away a lot of that background effort that we have to do in pulling things together for us. But in the end, the work, unfortunately, is still ours. Now, I will say, though, the value of the time it saves, of the things it starts for me, is so significant. I feel like I do about three times the work I could without it.
Um, uh, but that is still me heavily looking at it. I even work to always try to put things in my voice. I might go back and read it and say, okay, yes, this all sounds correct, I agree with every single bit of it, but let me make it sound more like me. You can actually teach AI also to sound more like you, but that's a whole nother discussion for another time.
But I, I like to go back and really hammer in like, this is the way I would say it because I think it's so important that we maintain who we are. Um, you know, people are using them for blog posts and everything else, which is awesome, but you'll notice they're all starting to sound and look the same. [00:35:00] And I don't think we need to homogenize our language down to these like, basic sets of words.
I find myself avoiding certain words that I did use to use, because I know AI uses them frequently. Um, because I don't want people to think that I'm just constantly using AI, 'cause I'm not always using AI. I think, uh, we will always be the most important element, especially when it comes to therapeutic, uh, practice in fields like ours.
Um, we, we are the important piece. We are the thing we're selling; we're selling us. Um, and that's a good thing, I think. You know, we have to. But I also get where you're coming from. You know, people are bogged down, they've got too much; can it help them? Yes. Can it, you know... the, the people I've seen kind of be early adopters who dropped off are the ones who hoped it would just do everything for them, that they could copy and paste and go on.
It's not going to be like that. Um, but it is going to give you those starting points, especially when you're hung [00:36:00] up, you know, staring at the blank sheet of paper, staring at, you know, this therapy plan you need to write, write out so that you can get moving and you're like, mm, I don't know where to go.
This is a good place to go and pull those, those ideas for sure.
Ana Paula Mumy: As you were talking, I was just thinking about, you know, just the human piece being the critical appraisal piece, right? We still have to critically appraise what we are looking at. And even with the evidence, when you're pulling that evidence, to say, okay, now I need to appraise: is this good evidence, or are they really pulling what's good?
Um, are these sources truly credible or reliable sources, right? So just that component of critical thinking that you can't replace, um, whether it's a clinician or a student or, or whoever it may be.
Ashley Dockens: People ask me often like, will AI take over my job? And, and speech pathology and audiology are, are one area that I don't see it ever taking over your [00:37:00] job.
I do potentially see some practices, like if you're looking for a job, I see some practices highly considering people who understand how to use AI, considering them more highly over you. Like, they might be more likely to get the job than you because you're not AI literate and they are, because there is a workflow increase that we see when people use it effectively.
And who doesn't want that in a, in a business, right? We, we are dealing with shortages. We need some way to work with more people in less time, with fewer people to do it. And this is one of the ways that they might do that. So will you lose your job to AI? Maybe not. But would you lose a job to somebody who knows how to use AI?
Absolutely. I, I absolutely think so. Um, I think there's gonna be an expectation over time. It's gonna be just as normal as everybody having a cell phone on their bodies, uh, that, that it's just gonna be integrated, uh, more and more and more and, [00:38:00] and we're just gonna have to get literate or be in trouble.
So that's pretty much it. Yeah.
Kate Grandbois: We've talked on this podcast a lot about the importance of trust between therapists, you know, in terms of, you know, what makes us, to your point, we're selling us, like what makes us effective, right? And there is something about, you know, our end goal is to support people with communication disorders and to help them make progress towards their goals.
That's, that's our job, if you boil it way down. And those things are not gonna happen unless we are establishing patient-centered safe spaces where individuals can trust us, you know, move through their vulnerability, you know, deconstruct all of those, you know, ableist structures. And that is something that a robot is not going to be able to do.
Right. So that makes a, that makes a lot of sense. Um, I think where it gets really interesting is what the, what the role of [00:39:00] AI is behind the scenes. And for me. You know, another thing that we've talked about on this podcast before is this concept of working at the top of the license. Right? So really trying to be efficient with your workload by delegating things that you don't need to be doing and focusing on what you're trained to do.
Um, and as a person who has never really had a, you know, a dedicated admin, you know, I don't, I don't have an administrator at my fingertips all the time,
Announcer: right?
Kate Grandbois: I think it's been, it's been an interesting exercise to kind of think about, what can I take off my plate? What things are administrative, or, I don't wanna say boring, that's not the right word, but nonclinical tasks that, um... mundane. Mundane, thank you. Yes, that's the word. Um, what kind of things can I take out of my brain and put somewhere else? And for me, these are things like brainstorming tasks. So for example, you're feeling [00:40:00] like you're in a rut with a client and you don't know what to do in therapy: give me 10 ideas that a 10-year-old boy might like, or give me 10 ideas of, um, toys that make a certain noise. I mean, I'm obviously in pediatrics; these are all the ideas that I'm coming up with.
Uh, I also feel like documentation is something that could be relatively easy to, to flex a robot for, right? Because, you know, maybe it's gonna need a human eye, or, like you said, kind of shifting it to be in our tone or making sure it's accurate. But I don't need to spend 10, 15 minutes formatting the document or, or whatever it is.
Ashley Dockens: Honestly, you can give it an example. Like, de-identify an example of how you do, for example, your report, you know, your documentation for, for your, your clients. You can give it an example. Now, de-identify. Always de-identify. You could give an example and say, essentially, today I saw [00:41:00] a patient, and, and again, keep it generic, like, uh, in their mid-teens, who has these issues.
These are the things we worked on. This is what was successful in today's session. Here's the things I want, like, just a bullet-point list: here's the things I'm wanting to do next week and that they need to work on from home. Put it into this report format. Go. And then all you end up having to do is go back and add in those identifiable pieces, you know, that are more specific about them, and you're finished, as long as it was accurate.
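A sketch of that de-identified workflow in code form. Everything here (the field names, the example details, the closing instruction) is hypothetical; the point is that only generic session facts go into the prompt, and identifiers are added back by hand afterward.

```python
# De-identified documentation prompt: no names, no dates of birth,
# no identifiers of any kind go into the text sent to the AI.

def bullet_list(items: list[str]) -> str:
    return "\n".join(f"- {item}" for item in items)

def session_note_prompt(age_range: str, issues: list[str],
                        worked_on: list[str], successes: list[str],
                        homework: list[str]) -> str:
    """Build a report-generation prompt from generic session details."""
    return (
        f"Today I saw a patient in their {age_range} who has:\n"
        f"{bullet_list(issues)}\n"
        f"Things we worked on:\n{bullet_list(worked_on)}\n"
        f"What was successful today:\n{bullet_list(successes)}\n"
        f"Home practice for next week:\n{bullet_list(homework)}\n"
        "Put this into my usual report format."
    )

prompt = session_note_prompt(
    age_range="mid-teens",
    issues=["/r/ distortion in conversational speech"],
    worked_on=["/r/ at the sentence level"],
    successes=["about 80% accuracy in structured drills"],
    homework=["five minutes of daily word-list practice"],
)
```

The generated report then gets the accuracy check and the identifiable details added back in, exactly as described above.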
And again, the accuracy check: you, you write these, you know, and when you're giving it the information, it does better. So because you've given it these pieces, chances are it's gonna do well anyways. But you fact-check it, add those names in, and you're good to go. Um, I mean, those who are in private practice, you also have to be your own advertising agency.
You have to be your own business person. You have to be, you didn't get any training for any of that. So AI is a great place [00:42:00] to go and say, how might I create an ad campaign that is effective in this region? You know, for this age population, and it's gonna give you suggestions. You know, you can ask it to create creative names for, for projects and things you're gonna roll out.
You can ask it to write a blog post or social media post. However, I will say I would still go back and add your own voice to that 'cause it is starting to get obvious when people are using them. AI loves to give you emojis when you use the word social media. So there'll be like 50,000 emojis in the things that it creates, just so you know.
But those kind of pieces, everyday things that don't necessarily need our full human touch (they need pieces of it), absolutely can be done through that. Writing a general email to your patients that you're gonna be opening up a, an extra week to do blah, blah, blah. You can say, okay, here's this little, you know, maybe it's one sentence.
Here's the thing I'm planning on doing next week. I need to send an email to all my patients about this. [00:43:00] Writing an email updating them, and it'll be mostly done, right? Again, am I gonna just copy and paste? Probably not. But is it something I'm gonna be able to go through and just change a few things out and be done with it?
That might have taken me an hour depending on how, um, how picky I am. And some of us are more picky than others, right? Uh, that, that now is finished in 10 minutes or less, you know? And so I will say, I, I really do believe my workflow and output has increased tremendously with use of ai. Not because I use it to replace those big pieces, like you're talking about those things that we need to do, like connecting with patients.
But because I have it do all those little mundane pieces. You know, even formatting something: if it doesn't have private information and you need something to be formatted into a bullet-point list, you can give that to AI and say, put it in this format. The only thing I'll caution about that again, though, is knowing what platform you're using and what privacy and security you have. So if [00:44:00] it's still your intellectual property, you might not just wanna throw it in there if you think you don't want it to show up somewhere else, 'cause it is training. But there are different levels of, um, different levels of each of these different programs that you can sort of purchase into.
So for example, I have privacy and security through Microsoft Copilot because our university pays for that. So I know whatever I put in there won't be training the large language model, and it won't show up in someone else's search. But I wouldn't want to necessarily put information that might be a little bit more private into a different one, like ChatGPT, if I've not purchased that level of security, which unfortunately comes with a price and usually is a monthly fee. And so I, I don't love, uh, using it for those purposes if I can help it. But I would recommend for everybody out there to have at least one chatbot where you have that level, either through your [00:45:00] business or through your personal account: purchasing something that has higher levels of security and privacy, where you know where the information's going and you feel comfortable being able to share, share things like your own intellectual property, things you've put together.
Um, but I mean, honestly, even, like, tone. Like, maybe you are really upset about something and you need to tell your boss whatever, but you, you know, when you write the email, you sound like a jerk. You can ask it to change the tone of your email to be more professional. Or, you know, um, write a letter of support for somebody based on these bullet points about them. Or, you know, there's all sorts of just everyday things that we're asked to do that take a lot of our time that maybe we, we can push over into AI a little bit more.
Mm-hmm. Yeah.
Ana Paula Mumy: I was thinking of just in a very practical sense. Um, just random things that used to take me forever. Like I [00:46:00] needed word lists with, you know, like our loaded word lists. And I'm like, so general, you know, come up with 20 sentences, you know, on various topics they each need, you know, however many words.
I mean, I can, like you said, you know, you put in the prompt and as long as you're specific, it'll literally do it in 30 seconds, you know. Where I used to, you know, sit there and write and, like, think, okay, I need a verb, I need an adjective, I need a noun. You know, you, like, walk
Ashley Dockens: around asking everybody, what's a word that starts with R?
You know?
Ana Paula Mumy: Yes. And just how quickly I can come up with the materials. 'cause you know, when I think about even like articulation therapy, you're doing the same thing every day, but you have to find 5,000 different ways to do it. Right? Because you're drilling, drilling, drilling, drilling, drilling. But it's like, how many times can you drill without this child going out of their minds and without you going out of your mind, right?
So it's like coming up with different lists or words or this or that, you know, and just, it's been so [00:47:00] useful to just have the ability to do that in such a, you know, just super fast. And, um, obviously in that sense there's no, you know, I have no qualms about. Those kinds of, you know, activities on there because yeah, it saves me tons of time and it's generally accurate because it, you know, it's such a straightforward, um, prompt.
So other, um, examples of ways. 'cause sometimes I feel like, you know, just in terms of, just very practical, um, for the SLP to think about, okay, you can generate this or this, or, I mean, you know, stories, um, right.
Ashley Dockens: We've not even really touched on things like the AIs that also do image generation or audio.
Oh my god,
Kate Grandbois: those images are so creepy.
Ashley Dockens: Well, well, they've gotten better. That's the thing that's even creepier. It's so bizarre
Kate Grandbois: looking. I can't even look at it. Some
Ashley Dockens: of them now look so real. You honestly would not know. And that's a little creepy. But that you, you could [00:48:00] though, for example, have a very specific need in your, your therapy that you're gonna do to have some sort of imagery to show a pa.
Maybe you're working on pragmatics, and you know you're tired of the same cards that you've used to show 'em whatever you want. Examples of the range of emotions: you can absolutely ask an AI, uh, like Adobe Express or DALL-E, to create images of sad people from different backgrounds, even. That shows greater representation for the patient, right?
Like, maybe, maybe they're from a community, you're not gonna find pre-made cards of people, what have you. Or, or they have a different cultural context, and maybe they don't do the same thing that we do. Like, not everybody wants to shake hands, especially not since Covid, but even before that, right? Certain communities, certain cultures don't do that.
Announcer: Mm-hmm.
Ashley Dockens: You know, and you're talking about social pragmatics; you might not be as familiar with what those would look like. So that, that could be a question you could ask AI: like, what are typical ways that people connect in this type of culture? You'd wanna [00:49:00] fact-check it, but you can ask it to even provide you references.
Perplexity, uh, AI will actually give you where it pulls the information from. So that's one I like to use when I'm trying to figure out, where did they get this information, and is it, is it worth something? But you could, one, get that information from it, but then you could take it to an AI that does image generation and say, create an image of a person of this type of background who is showing happiness, who is showing excitement, who is, and so on.
So creating those materials is really, really, really quick. Creating take home things that you want 'em to work on that you're worried they won't be very interested if you just give 'em a little bullet point list, take your bullet point list, put it into AI and say, make this into an interactive handout that my patient can use when they go home this week and it's gonna do that.
Um, so all of those kind of, I mean, even in academics: two of my favorite things are, um, simulated conversations (and I'll, [00:50:00] I'll tell you in a minute what I mean by that), uh, and creating case studies. So one of the things about being an academic is we are always having to give case studies for our, our students to study.
And after a while, they're all the same ones that we're getting from the same whatever books and what have you. I can create 20 different ones in five minutes that have similar complexity for an entire group of students just by asking ai. Um, or I can encourage something like a simulated conversation, which is where essentially I give the AI a role.
I say: you are a patient who is 65 years old, resistant to hearing aids; you're afraid that you'll look old, you're afraid that you'll look ugly, you think they cost too much. Like, give it all the basics, you know. Have a conversation with me, I'm your audiologist. And allow my, my students to practice those types of conversations.
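The role setup described above maps directly onto the system/user message convention most chat models use. This sketch only builds the message list; the model name and the call that would return the patient's reply are left out, since they differ by product, so no specific vendor's API is assumed.

```python
# Simulated-conversation setup: the "system" message assigns the AI
# its patient role; each student turn is appended as a "user" message.
# A real client would send this whole list to whichever chat model
# is being used, and append its reply as an "assistant" message.

patient_role = (
    "You are a 65-year-old patient who is resistant to hearing aids. "
    "You are afraid they will make you look old, you think they look "
    "ugly, and you believe they cost too much. I am your audiologist. "
    "Stay in character and respond as this patient would."
)

conversation = [{"role": "system", "content": patient_role}]

def student_says(text: str) -> list[dict]:
    """Record a student turn and return the full running context."""
    conversation.append({"role": "user", "content": text})
    return conversation
```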
But you can even do that with someone working on, let's say, social pragmatic conversations. Now, it's something you'd wanna try first, right? You, as the [00:51:00] clinician,
Announcer: Mm-hmm.
Ashley Dockens: Would need to go and attempt and try and talk to the patient about the fact that, hey, it might have some stereotypical behaviors in here.
Some of this might not be right on the money, but I'm gonna be here with you when we do it anyways. This will give you an opportunity to talk to someone who's not me. Try out some of the things that we've, you know, used and then that can be a conversation after the fact, okay, you did this. What about that, that, that technique we talked about, what about that thing that we talked about that you could try with whatever, try that with them and see how it goes.
You know, they can go back in and change the conversation and see if it improves. Now is that perfect? No. But is it a good, safe place again where no one's gonna be pointing fingers at 'em if they don't do it right? Absolutely. Um, so there's some really unique ways I think to do that. It's just a matter of really making sure you're being very transparent when and how you're using it, and that you're present in those things.
If it's gonna be used with a client, that presence of you being there to vet it and [00:52:00] say, yes, that, that is how someone might respond, what would you say... you know, um, that is important. Um, but I mean, really, the possibilities in some, some ways are almost endless. Um, it's just a matter of how willing you are to ask the question, uh, and how willing you are to be receptive of new creative ideas. Um, really, really quickly,
You can get a lot of interesting, interesting things. That's completely
Kate Grandbois: wild. You kind of blew my mind a little bit, thinking about AI as a conversational partner, as a learning experience, particularly for those who are students or maybe new to the field who don't wanna, quote, feel stupid or make a mistake or ask a question that makes them look like they didn't do the reading, or, or whatever it is.
You know, if there's, like, power differentials there. Right. Um, I just wanna say a couple of these things back to you in [00:53:00] summary, just to kind of,
Announcer: sure.
Kate Grandbois: Again, like, get the lay of the land in terms of AI. So what we're hearing generally is: if you are using AI as a clinician, you really wanna make sure that you're considering security and privacy at the forefront, which sounds like, 10 times outta 10, is gonna require some money, some sort of paywall, some sort of paid feature.
Um, sounds like a lot of these things are included as part of other packages that you may pay for. So for us as a private practice, we use, um, Google; we have a paid Google Workspace. We've been able to add Gemini (it's one of them) into Google Workspace, which I think is HIPAA compliant if you're using it within Google.
But don't anybody quote me, please, please Google that. Um, but you might know as an expert, I don't even know.
Ashley Dockens: Actually, it probably depends on how you got it integrated. So again, I always think it's a fair question to go back to the people you purchase it from and say, Hey, can you send, most of them will be [00:54:00] able to send you very quickly a link to all of their security and privacy information.
Right. That makes
Kate Grandbois: sense.
Ashley Dockens: And, and then I would just keep that available and ready, maybe even posted, like, on a subsection of a website for you: this is how we use AI; here's the privacy and security. Um, yeah.
Kate Grandbois: Mm-hmm. Good to know. I remember Googling it, that's why I said that. But I'll, I'll look into it.
I'll, I'll dig. I'll lead by example over here. Um, but anyway: so you could use Microsoft Copilot if you have the Microsoft products. It sounds like you, you need to really be considering a paywall regardless of the software that you choose; it could be integrated into something that you already pay for.
You've also mentioned Claude AI. You've mentioned Scite AI, um, which has more of a research lean. You've mentioned Perplexity AI. I assume each of these has their own strengths. I wonder if you could talk to us a little bit about, you know, [00:55:00] what tools a clinician might really strongly consider, um, and then also if there are any trainings or other
moments of learning for us to make sure, as clinicians, we are using this ethically and considering all of these additional things, obviously, beyond this conversation.
Ashley Dockens: So, um, yes to all of that. Uh, I will say that, as far as, like, things I would recommend: um, I personally am a huge fan of Claude. Claude, though, again, unless you're paying for that higher-level paywall for the extra security, you just need to make sure. What I would say is, any clinician could use any AI if they're using it in a way where they're considering what they're putting into it, privacy-wise.
If they can de-identify and keep that private, you could really use the free versions of things like ChatGPT, Claude, Perplexity, et cetera. It's when you're wanting to give it even more information: maybe you want to give it something that isn't [00:56:00] directly patient identifiable, but if someone knew the region it was from, it could still potentially be traced back to somebody. That's where I definitely would want those, uh, security and privacy, uh, pieces.
But, um, you know, I, I'll say: if you want to know where the information's coming from, Copilot and Perplexity are my go-tos for that, because both of them, when they give you a response to any question, will give you the links they pulled things from. And if you don't like what it's pulling things from, you can redirect it and say, please respond again,
but only using peer-reviewed publications to give me that response. And then it'll give you the new things it pulls from. However, I would always check those, 'cause, one, it can create completely false, doesn't-exist articles. Matter of fact, I was playing with things in an area I've published in, and it gave an article that I had supposedly written, in a journal I had published in before. It sounded like something I would've written.
I would [00:57:00] love to have taken credit for it, but it didn't exist. But it was put into beautiful APA format. It looked real. It was, you know... um, but it was not, and I cannot add that to my CV, unfortunately. Um, but, um, okay, Kate, I've lost my brain. What, what was your primary question? Um,
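The fabricated-article story suggests a habit worth making mechanical: never accept a generated reference until it matches a source you control. A hedged sketch, where the trusted set and the citation keys are entirely made up, and the real check would be a library search or DOI lookup:

```python
# Flag generated citations that don't exist in a trusted index.
# KNOWN_REFERENCES stands in for whatever you actually trust:
# your reading list, a library search, a DOI resolver.

KNOWN_REFERENCES = {
    "dockens2018hearing",    # hypothetical citation keys
    "mumy2020artic",
}

def vet_citations(generated: list[str]) -> dict[str, bool]:
    """Map each generated citation key to whether it is known to exist."""
    return {key: key in KNOWN_REFERENCES for key in generated}

report = vet_citations(["dockens2018hearing", "dockens2021plausible"])
# Anything mapped to False needs a manual lookup before it is cited.
```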
Kate Grandbois: what tools could we consider?
And then what, um, are there additional trainings? Are there any additional, you know, resources out there for clinicians to think about if we are gonna use them to make sure we're doing it correctly and ethically? So as
Ashley Dockens: far as tools, I would say Claude sounds the most human-like; it's gonna give you the most human-like response output.
So if I'm creating something that I want, like, uh, a letter that I'm gonna send to my patients, or something I'm gonna put on social media, I'm probably gonna use that. Uh, if I'm wanting to do simulated conversations, I'm probably gonna use it, or ChatGPT, um, because they both do best with that. If I'm wanting to know, again, where the information's coming from, I'm using Microsoft [00:58:00] Copilot, I'm using Perplexity.
Um, if I, uh, you know, want to use something I already have, I definitely just wanna find out what their privacy and security is in the AI that they're implementing. Um, as far as where to get training: there are trainings all over the web, but not all of them are created equal. Uh, so, one, I'll share a Padlet with you guys.
Uh, I don't know if you're familiar with Padlet; it's kinda like Pinterest on steroids. But, um, it's one I've been using when I'm presenting to faculty all over the place, um, that has just AI resources there. And there is a column on trainings. However, I will say most of those trainings are very specific to higher education.
Um, but the, um, AI task force for CAPCSD: that's really part of what we're trying to do, put something together where there's a repository of trainings specifically for our field, or that are related to our field. So that will be coming [00:59:00] up on a website through CAPCSD soon. Um, and so I would say watch for that.
If you're coming to CAPCSD, anybody out there, I'm doing two, uh, two plenary speeches on AI, and they will be both clinical and academic; there'll be a clinical section for that. Um, and, and you're certainly welcome to either come to that or listen. I also just have, uh, trainings that I've done that may not appear relevant:
if you just come in, you'll be like, okay, this is too higher ed and this doesn't fit me. But honestly, if you learn the skill set from those trainings, it's still gonna apply anywhere. So, uh, our website, which I can share with you guys so you can put it out there, uh, for our Center for Innovation and Teaching and Learning, has all of our recorded trainings, and y'all are welcome to watch those as well.
Um, but, uh, more and more I notice, uh, groups like ASHA are starting to invite more people in with AI. And so that's the kinda list we're trying to create on our CAPCSD [01:00:00] website for you, for you guys to follow. I, I'm sad to say that's not fully put together yet. Um, but I will say (and I don't mean this negatively about anybody else you're gonna hear from on AI), no matter where you get the training, always consider whether they're taking into account things like privacy and security.
'Cause they can teach you some really cool things about some really cool products that exist; there's tons we haven't even remotely touched on today. Um, but they might not be as private and secure as we need them to be, to meet the standards we have to uphold in our profession.
So really, really ask those questions. If you're at a live presentation, don't, don't feel bad to be like, hey, but what do we do about privacy and security? Um, you know, what do we do about assent? How do we make sure our patients understand what we're doing? And that, definitely, I think ends up needing to be something we're gonna have to add to our intake packets.
Patients need to be able to sign off on it, just like they [01:01:00] understand their, their privacy rights. Um, and so that's something, you know, hopefully we'll be able to put some more information out from our group, but, um, that really isn't set in stone yet. So, uh, each individual really needs to consider for themselves.
Um, there are all other sorts of AIs; I, I, we don't have enough time to talk forever. But I will say, um, one of the other things I find really interesting is there are AIs that will create music. And if you're needing a patient to learn something and they have great musical memory, you can create kind of almost study guides for them by giving the AI, um, something to create a song from.
So the one I use for the most part (it is mostly free; it's got credits) is called Udio, U-D-I-O. And essentially you can go in and say, hey, to the tune of whatever, or make it a rock song, or, uh, you know, create this song about these things, and give it either the lyrics [01:02:00] directly or what you want it to be about.
I've played with it for, uh, for students and had it write songs about the cranial nerves and what they do. And, uh, I've, I've done it for other things. But sometimes that's what you can get a kid interested in, or a patient interested in, to, to really learn more. Um, and I know I had great musical memory as a kid.
I still remember the preamble of the Constitution for no reason, because of Schoolhouse Rock, right? And so, um, you know, there's just all sorts of possibilities, um, for us. And I, I would say my best advice is to get out there and stick your fingers in it, 'cause until you have tried it and, and played with it, um, you're not gonna know what it's capable of.
And the other thing I'll say is garbage in, garbage out. So if you don't write good prompts, that does matter. Uh, you're not gonna get great responses. So if you don't get it right the first time, you might even [01:03:00] ask the AI, how would you write a good prompt to get it to do this? And it will give you a prompt that you can then put in and try again.
Um, and I guess the last little tip that I wanna share with y'all is a lot of people try to use one ongoing discussion in their chats with AI. Like, they'll have ChatGPT and they'll just ask it all these unrelated things. Whatever chat you're in, it's learning from. So if you've asked it about this patient, and now you're asking it about a different patient.
It's gonna color the response based on whatever you asked before. If you don't want or need those to be connected, and you feel like, wait a minute, it's kind of recommending some of the same things, start a fresh chat, 'cause chances are it's just being influenced by something you've already asked it.
You might have asked it to give you a recipe for the ingredients that you have in your house, and it's over here thinking about something entirely different than what you're asking it now. It's not gonna give you good output. So [01:04:00] always start a fresh thread would be my go-to for those things.
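For readers who want the mechanics behind the fresh-thread advice: chat models generally have no memory of their own, and each reply is conditioned on the whole message history sent with the request. A minimal sketch of that idea (the `answer` function here is a hypothetical stand-in for a model call, not any vendor's real API):

```python
# Sketch: why one long-running chat "colors" later answers.
# A chat model sees ONLY the message list you send it each time.

def answer(messages):
    """Hypothetical stand-in for a chat-model call."""
    user_turns = [m["content"] for m in messages if m["role"] == "user"]
    # A real model conditions its reply on everything in user_turns.
    return f"reply influenced by {len(user_turns)} prior user turn(s)"

# One ongoing thread: questions about patient A and patient B share context.
thread = [{"role": "user", "content": "Ideas for patient A?"}]
thread.append({"role": "assistant", "content": answer(thread)})
thread.append({"role": "user", "content": "Ideas for patient B?"})
mixed = answer(thread)   # the model sees BOTH patients' questions

# Fresh thread: patient B starts with a clean slate.
fresh = [{"role": "user", "content": "Ideas for patient B?"}]
clean = answer(fresh)    # the model sees only patient B
```

Starting a fresh chat is simply starting a new, empty history, which is why an earlier, unrelated question can no longer bleed into the new answer.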
And I know that had nothing to do with what you asked, but I wanted to say it so that everybody knows.
Amy Wonkka: I did not know that. That's good for me to know.
Kate Grandbois: Yeah, all of this is incredibly helpful. Um, you know, I feel like, regardless of the tool that we're using, we get more out of the tool if we understand how it works and we understand, you know, the man behind the curtain, so to speak.
Not to use too much Wizard of Oz imagery here, but, you know, understanding the fundamentals and the guts of something and how it works can help us leverage it to its maximum potential. So I think that was incredibly helpful. Um, in our last minute, do you have any final suggestions? I know you said you could speak about this topic for days on end, so we're squeezing just a tiny little taste into one hour. Um, but do you have any final pieces of advice?
Ashley Dockens: Yes. Document, document, document. And what [01:05:00] I mean by that is not necessarily the same way we document for a patient, but there are gonna be prompts that work so well for you, and you're gonna forget what they are.
So I encourage everyone to have one Word document where, when something works really well and gives you the best suggestions, you take that prompt, copy it, paste it into that document, and use it again later, because, again, the way that you write prompts makes such a difference. And so if you're getting good results, don't reinvent the wheel; use that same manner again. So that's one. Two, if anyone ever questions you, like, how did you come about this? You wanna be able to say openly, I used AI, and these are the questions I asked, and this is how I've vetted what it gave me. Uh, 'cause I think we are coming into an era where we just have to be really careful that people understand where we got information. You know, we were talking a little bit about the fact that there are other types of AI. The last thing I guess I'll say, and I say this because quite a few people who listen to this may work with especially elderly or [01:06:00] compromised individuals who don't necessarily have the mental wherewithal to be very safe and secure in how they interact with things.
Um, there is a lot of potential for AI to be misused in scams. I can clone a voice now. We could already spoof phone numbers, so I could already call from your phone number when it's not really you calling, but now I can call from your phone number and use your voice. And, you know, we saw it even in the politics that have just recently happened.
Both sides were having all sorts of things come out about them that weren't true, because we can no longer trust for sure what we see or what we hear anymore. Video output may not be real. Audio output might not be real. So really, always dive deep. Look for second and third and fourth sources before you choose to believe something's true.
If something sounds off, it probably is. And be very transparent in your use so that people [01:07:00] understand. But it's a beautiful tool. It does so much good. You just have to be really open and honest about how you're using it and how you're making sure it's being used in a way that you're proud of. That would be it, yeah.
Kate Grandbois: This has been so eye-opening.
My brain's exploding everywhere, and I feel empowered with all these resources and also, at the same time, very undereducated about all of the things that I now need to understand better. So thank you so much for spending time with us. This was incredibly helpful, and we're really so grateful.
Ashley Dockens: Super happy to do it. And if you need more, you know where to find me, um, because, you know, those questions just come up. They come up, yeah. Uh, I will say, this is a topic where every time I've spoken on it in a public space, I need at least 30 minutes for questions, because just like you said, people get ideas, but then they're like, wait a minute.
So don't feel bad about reaching out. But I appreciate y'all having me. It was fun. [01:08:00]
Kate Grandbois: Thank you again so much.
Ashley Dockens: Yeah, you're welcome.
Ana Paula Mumy: Thank you so much for joining us for today's episode. As always, you can use this episode for ASHA CEUs. You can also potentially use this episode for other credits, depending on the regulations of your governing body. To determine if this episode will count towards professional development in your area of study,
please check in with your governing body, or you can go to our website, www.slpnerdcast.com. All of the references and information listed throughout the course of the episode will be listed in the show notes. And as always, if you have any questions, please email us at info@slpnerdcast.com.
Thank you so much for joining us, and we hope to welcome you back here again soon.