AI That Sounds Relevant: Reviews, Sentiment & Enterprise Feedback at Scale

Host Bio

Seth Earley is the Founder and CEO of Earley Information Science, where he focuses on knowledge strategy, enterprise data architecture, and information findability—work that directly shapes digital service quality and customer experience outcomes. Over more than 25 years, he has advised Fortune 1000 organizations on reducing the “information chaos” that undermines digital transformation and performance. Earley is the author of The AI-Powered Enterprise, an Axiom Business Book Award-winning book on applying ontologies and knowledge engineering to scale intelligent systems. His writing and thought leadership have appeared in IEEE’s IT Professional, where he previously served as an editor, and in publications such as Harvard Business Review, and he co-authored Practical Knowledge Management (IBM Press). He also hosts the Earley AI Podcast, which interviews practitioners on what’s practical—and what actually works—when deploying AI in real organizations.

Summary

Seth Earley frames this conversation around a practical leadership question: as AI reshapes how organizations manage information, how do you use it to deliver “more meaningful interactions” and “better feedback from the marketplace” without falling into shallow automation? He opens by emphasizing the intersection of customer engagement and customer feedback, positioning it as a real operating challenge—not a novelty project.

A core theme emerges early: many organizations fixate on whether AI “sounds human,” but miss whether it’s actually useful to the customer. George Swetlitz argues that leaders “don’t really understand what’s the best way to leverage AI” in customer sentiment, because they optimize for human-like language instead of relevance. Earley immediately validates the point: “That’s an interesting distinction,” and then spells out why it matters—“It doesn’t matter if you’re sounding human” if you’re “not addressing my problem.” He punctuates it with a host-driven standard for quality: “Give me something that’s relevant and meaningful.”

From there, the discussion uses online reviews as the proving ground for what “relevance” means at scale. Swetlitz describes how RightResponse AI aims to help large organizations become more efficient while still improving customer experience outcomes—especially in environments where central teams are responsible for massive volumes of public feedback. Earley is explicit that he wants substance over sales, inviting “a little bit of a company commercial,” but “not too sales,” and repeatedly brings the conversation back to what’s different in the approach and where the measurable value shows up.

Why sentiment analysis changes when you treat context as the unit of meaning

Earley presses on the point that “people have been doing sentiment analysis for a very long time,” so “what is different now?” Swetlitz’s answer is that modern AI can interpret phrasing in context—especially in cases like sarcasm or mixed comparisons—where older NLP approaches can misattribute sentiment. He gives a concrete example: if a reviewer complains about one restaurant and then praises another, traditional methods may struggle to separate what’s relevant to the business being evaluated; the newer approach does better at isolating the relevant portion.

Operationally, Swetlitz describes a more granular pipeline: reviews are broken into phrases, phrases are filtered to keep what’s actually about the business, and those phrases are mapped to a set of customer-defined “topics” (which function like KPIs). He emphasizes this isn’t only for analytics; the same structured understanding gets reused in multiple places—“when we’re generating a response,” “when we’re doing a monthly summary,” and other recurring reporting/feedback loops. (The transcript doesn’t label this as “percent positive,” but the logic presented is topic-level diagnosis: what’s being talked about, how it’s being framed, and how that varies by location and competitor.)

The “facts” approach to avoiding generic AI replies

Earley highlights a common failure mode of generative AI: it answers based on a general “model of the world” and “doesn’t necessarily know the details behind what you sell.” He asks how Swetlitz contextualizes responses with company-specific information.

Swetlitz responds by critiquing the current state of automated review replies. He says consumers are reacting negatively to responses that merely restate the review, describing the reaction as: “This is really irritating ’cause you’re just taking up space on the page.” In other words, the AI may sound polite, but it adds no value. The proposed fix is a structured, organization-specific “library of facts.”

Swetlitz explains the onboarding workflow: the system reads the company’s full review history to learn what people consistently talk about, reads prior company responses, and reads the website to identify information that would actually help a prospective customer who is scanning reviews. Those are turned into draft “facts,” which the company then reviews for validity. After that, each new review is analyzed against the fact library, and the response incorporates the subset of facts that are relevant—“one fact, three facts, zero facts”—rather than repeating the customer’s text.
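The transcript doesn't describe how facts are matched to reviews; assuming a simple relevance filter in place of the LLM-based analysis, the "one fact, three facts, zero facts" selection might be sketched like this (the fact entries and keyword matching are illustrative, not the product's logic):

```python
# Hypothetical fact library of the kind Swetlitz describes: short,
# company-approved statements mined from reviews, past responses,
# and the website. Entries and keywords are invented for illustration.
FACT_LIBRARY = [
    ("parking", "Free validated parking is available in the rear lot."),
    ("wait", "Online booking usually cuts the wait to under ten minutes."),
    ("warranty", "All repairs carry a 12-month parts-and-labor warranty."),
]

def relevant_facts(review: str, max_facts: int = 3) -> list[str]:
    """Return the subset of facts relevant to this review: zero, one,
    or up to `max_facts` — never a restatement of the review itself."""
    text = review.lower()
    hits = [fact for keyword, fact in FACT_LIBRARY if keyword in text]
    return hits[:max_facts]

def draft_response(review: str, reviewer: str) -> str:
    """Assemble a reply that adds information instead of echoing."""
    facts = relevant_facts(review)
    lines = [f"Thanks for the feedback, {reviewer}."]
    lines.extend(facts)  # add value rather than repeating the review text
    if not facts:
        lines.append("We'd love to hear more at any time.")
    return " ".join(lines)
```

A review mentioning a long wait would pull in the booking fact; one with no matching topic would get zero facts, mirroring the "any number of facts" behavior described above.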

Earley’s validation here is direct and editorially useful: “It’s a more meaningful answer,” because “you’re not just repeating back their question.” He characterizes the goal as “a personalized presentation of information that’s contextually relevant” to the specific review.

Scale, staffing, and the “human-in-the-loop” reality

Earley then pushes for the real-world scale: what volumes are organizations dealing with, and why have previous automation attempts fallen short?

Swetlitz focuses on large enterprises and the economics of attention. As a former operator, he argues executives don’t want “10 people sitting in a room responding to reviews,” because it’s not high-value work. The problem is that template replies or generic AI still require people to review and edit, which steals time from the customers who most need human help—especially negative experiences.

He gives a concrete before/after pattern: a company receiving “a thousand reviews a week” might have “three FTEs” focused on review handling; with automation on routine responses, they can “redeploy about two thirds” of that effort toward resolving real customer issues. He describes a shift where instead of half an FTE helping unhappy customers, the team can flip to “two and a half FTEs working with the customers that need help.” Earley reflects the intended operating model back clearly: automate the “lower hanging fruit,” and put people on “higher value interactions” and “edge conditions” that “really do need a human being involved.”

Swetlitz also ties this to measurable experience outcomes: when the company solves the customer’s problem, “they go in and they change their rating,” improving overall ratings alongside retention.

Reviews as conversion: “bottom of the funnel” behavior

A key moment in the episode is Swetlitz’s framing of reviews as a decision point: “No one’s closer to the bottom of the funnel than someone who’s reading reviews.” Earley leans into this idea and invites a measurement discussion. Swetlitz explains the behavioral cue: readers scroll reviews, glance at company responses, and usually move on—unless the response contains something truly relevant. When it does, readers “stop,” and that pause signals credibility: “this company actually cares.”

Because direct revenue attribution is hard, they discuss proxies such as “click to calls” and other click-through actions from review pages. Swetlitz shares an observed range—“around 15 to 20%” increases in click-through rate—when responses become more contextually relevant.
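The measurement itself is straightforward: baseline the click-through rate before the change, then compute the relative lift afterward. A minimal sketch with made-up numbers:

```python
def relative_lift(baseline_rate: float, new_rate: float) -> float:
    """Percent change in a behavioral proxy such as click-to-call rate."""
    return (new_rate - baseline_rate) / baseline_rate * 100

# Illustrative figures only: 400 clicks on 10,000 review-page views
# before, 470 after responses became contextually relevant.
before = 400 / 10_000
after = 470 / 10_000
print(f"{relative_lift(before, after):.1f}% lift")  # prints "17.5% lift"
```

A 17.5% lift on these invented figures would sit inside the 15-20% range Swetlitz reports.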

Review requests: personalization without the “creepy factor”

The conversation then flips to the other side of the review ecosystem: review requests. Swetlitz says generic review requests feel “robotic,” and argues personalization can drive a “pretty dramatic” lift—“30, 40%”—in conversion to completed reviews. He offers a detailed example for a personal injury law firm, where the request can responsibly reflect known, relevant case context (e.g., type of accident, a successful outcome, timing) rather than a bland “it was our honor to work with you.”

Earley immediately raises the operational and ethical concern: automation at scale can cross into “too much personal information,” creating a “creepy factor.” Swetlitz’s answer is straightforward: “there’s a human in the loop.” A staff member determines which CRM fields are acceptable and whether a given client should receive a personalized request (a “yes or no” decision).

Earley summarizes the practical benefit in a line that works well as “third-party validation” content: “It’s like a template on steroids”—you remove manual steps by giving staff a strong, context-informed draft to approve and edit.

Multi-location competition and actionable diagnosis

Finally, the episode expands from response-writing to competitive and operational diagnosis—especially for location-based businesses. Swetlitz notes that each location operates in its own competitive environment, so enterprises often focus on the “20%” of locations with problems—the “problem children,” as Earley jokes.

Swetlitz describes a competitive workflow: identify competitors, “download all the reviews,” and analyze them using the same topic/KPI approach. A particularly actionable insight is review cadence. Public profiles show total review count, but not whether the reviews arrived in the last year or the last decade. Swetlitz argues cadence matters for ranking: if a competitor gets “20 reviews a month” and you get “five,” “they’re gonna beat you every time” in search visibility. From there, topic-level comparisons (service, product quality, value) help leaders see whether the problem is operational performance, positioning, or simply insufficient review volume.
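The cadence insight is easy to operationalize once reviews carry timestamps, which public profiles don't surface directly. A minimal sketch of the monthly-cadence comparison (the data is invented to mirror the "20 vs. five" example):

```python
from datetime import date

def reviews_per_month(review_dates: list[date]) -> float:
    """Average monthly review cadence over the observed span.
    Public profiles show only the total count; cadence needs the dates."""
    if not review_dates:
        return 0.0
    first, last = min(review_dates), max(review_dates)
    months = (last.year - first.year) * 12 + (last.month - first.month) + 1
    return len(review_dates) / months

# Illustrative: your location gets 5 reviews/month, a competitor gets 20.
mine = [date(2024, m, 15) for m in range(1, 13) for _ in range(5)]
theirs = [date(2024, m, 15) for m in range(1, 13) for _ in range(20)]
print(reviews_per_month(mine), reviews_per_month(theirs))  # prints "5.0 20.0"
```

The gap, rather than the raw totals, is what turns "do better" into a concrete monthly target for a location manager.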

He captures the management value with a blunt contrast: without this information, “all you’re doing is yelling at the manager… ‘Do better.’” With it, leaders can set concrete targets (e.g., increase review volume to a specific monthly number) and address the specific experience dimensions where competitors are outperforming.

Earley closes by synthesizing what he finds compelling: using AI to quantify “hyper-local” customer perceptions, compare them to what the company communicates across touchpoints, and then benchmark against competitors—“doing that across… hundreds or thousands” of local customer interactions. In other words, the episode isn’t about AI making responses sound friendlier; it’s about AI making customer feedback operationally legible and commercially meaningful.

Q&A

  1. Should I respond to every review, or only negative ones?

    Swetlitz frames the practical constraint as capacity: large organizations don’t want teams spending all day on routine replies, especially when “negative reviews require some knowledge about the business” and need human handling. He implies automation can cover the high-volume, lower-risk responses while humans focus on complex negative cases and direct outreach to fix issues.

  2. How do I avoid AI replies that sound like a bot or a template?

    Earley and Swetlitz both critique “sounding human” as the wrong target if the reply isn’t useful. Earley’s standard is blunt: “Give me something that’s relevant and meaningful,” while Swetlitz warns consumers see generic responses as “just taking up space on the page.”

  3. What does contextually relevant mean in a review response?

    Swetlitz defines relevance as responding from the customer’s situation and the business’s real specifics, not merely restating the complaint or praise. Earley echoes this by saying it’s not enough to “sound human” if you’re “not addressing my problem.”

  4. How can I ground review replies in company-specific information if the model doesn’t know my business?

    Earley points out that generative AI answers from a general “model of the world” and “doesn’t necessarily know the details behind what you sell.” Swetlitz’s approach is to build a company-specific “library of facts” and then generate replies that incorporate only the facts relevant to the specific review.

  5. What’s a practical way to build a “fact library” for review responses?

    Swetlitz describes onboarding as reading the company’s “entire history of reviews,” existing responses, and the website, then extracting useful, business-specific information into “facts.” He says the company then “go[es] in and review[s] those facts” to confirm they’re valid.

  6. How many facts do I actually need to cover most review situations?

    Swetlitz gives a concrete range: “a typical, large organization will have somewhere between 20 and 30 facts,” because customers tend to talk about a limited set of recurring topics. Earley even jokes he expected “20, 30,000,” underscoring the point that review themes cluster.

  7. Why are generic AI review replies backfiring with customers?

    Swetlitz says consumers are reacting negatively when replies simply repeat the review, calling it “really irritating” because it adds “no value” and clutters the page. Earley reinforces that a response must add meaning—otherwise it fails the customer’s intent.

  8. How can responding to reviews affect conversion, not just reputation?

    Swetlitz argues review readers are decision-ready: “No one’s closer to the bottom of the funnel than someone who’s reading reviews.” He says a context-rich response can make readers “stop,” conclude “this company actually cares,” and increase the likelihood they choose that business.

  9. What’s a measurable way to see if better responses are working?

    Swetlitz suggests using behavioral proxies like “click to calls” and click-through rates from review pages, noting observed lifts “around 15 to 20%.” Earley frames this as baselining “their click through rate before” and comparing after the response quality changes.

  10. If I run a large enterprise, how does AI change staffing for review management?

    Swetlitz describes a redeployment model: rather than spending the majority of team time editing templated replies, companies can “redeploy about two thirds” of effort to solving real customer problems. Earley validates the intent: automate “lower hanging fruit” and focus humans on “higher value interactions.”

  11. Where should humans stay in the loop, even if AI is strong?

    Swetlitz is explicit that complex negatives are not ideal to automate: “No AI is gonna be able to deal with those. You don’t want AI to deal with those.” He describes humans needing to investigate, identify the customer, and “call them up and try to solve the problem.”

  12. How can review work lead to customers changing their ratings?

    Swetlitz says when a negative experience is actually resolved, “they go in and they change their rating,” which improves overall ratings while supporting retention. Earley summarizes the outcome as improved “CSAT scores” plus better allocation of effort.

  13. How can I make review requests feel less robotic and more personal?

    Swetlitz says most review requests are “very robotic,” and suggests incorporating non-sensitive, known context to make them more “personal” and “emotional.” He claims this can increase “request conversion to review rate” by “30, 40%.”

  14. How do I avoid the “creepy factor” when personalizing review requests?

    Earley raises the risk directly, warning you don’t want to use “too much personal information.” Swetlitz answers with process guardrails: a human decides which database fields are acceptable and whether a request should go out (“a yes or a no”).

  15. What should I ask customers so reviews are more informative (and more useful to readers)?

    Swetlitz suggests using what you know about the customer to pose “a question that that customer could answer” to “write a better review for the benefit of all the review readers.” Earley connects this to how clearer feedback helps customers decide and helps organizations diagnose gaps.

  16. I manage many locations—how do I identify which ones need attention first?

    Swetlitz says enterprises often focus on the “20%” of locations “they’re having a problem with,” because interpreting results still takes work. Earley labels these the “problem children,” reinforcing the idea of triaging attention instead of trying to fix everything at once.

  17. How do I compare my locations against competitors using reviews?

    Swetlitz describes downloading competitor reviews and analyzing them “in the same way” as the company’s, including calculating “how many reviews are they getting a month” and comparing topic-level performance. He argues this reveals whether you’re losing due to review cadence, service perception, product quality, or other KPIs.

  18. What’s the “future” improvement here—what will make these systems better next?

    Swetlitz says quality improves when “the smaller the ask,” describing how breaking tasks into many “agentic steps” avoids the model getting “lazy and tired.” Earley doesn’t dispute the direction; the discussion points toward more robust workflows and easier onboarding as models and automation improve.

Transcript

[00:00:00] Announcer: This is the Earley AI Podcast hosted by Seth Earley. Join us as we delve deep into the passions, expertise, and experiences of thought leaders and practitioners to talk about what's possible with artificial intelligence. The Earley AI Podcast is sponsored by Earley Information Science, your digital transformation journey with design and deployment of innovative technology solutions, as well as Vector.

[00:00:26] Announcer: Now, enjoy the show. 

[00:00:28] Seth Earley: Well, welcome to the Earley AI Podcast. My name is Seth Earley. I'm your host. And, uh, we are going to be exploring, uh, how artificial intelligence is reshaping the way organizations manage information, create value, deliver customer, uh, better customer experiences today. We're gonna be talking about the intersection of, uh, customer engagement and customer feedback with AI, and we're going to be talking about, um, how the, uh, uh, emerging technologies can be applied, uh, to delivering more meaningful, uh, interactions and being able to get better feedback from the marketplace about our customers, their experiences.

[00:01:07] Seth Earley: Uh, the competition. What is the competition doing? So joining me today is George Swetlitz. He's CEO and co-founder of RightResponse AI. George has a lot of expertise in natural language technologies, enterprise AI adoption, and applying advanced models to solve real business problems. His work focuses on building systems that go beyond generating text, uh, to really understanding how, uh, customers are interacting, what kinds of feedback they're providing, and again, what that competitive landscape is looking like.

[00:01:37] Seth Earley: So, George, welcome to the show. 

[00:01:39] George Swetlitz: It's nice to be here, Seth. Thank you. 

[00:01:41] Seth Earley: So let's talk a little bit about, uh, I, I, I want to get to exactly what you do and what your company does in a minute. But, but from your, uh, your exposure in the marketplace and you're talking to organizations, what are the types of things that they're not quite understanding?

[00:01:55] Seth Earley: What are some of the big misconceptions that you're finding in the marketplace about AI and about how emerging technologies are impacting, uh, the customer experience and that whole feedback mechanism of, of, uh. Getting that customer sentiment, understanding how they're interacting, uh, and, and what their experiences really are.

[00:02:13] Seth Earley: But what are people missing or what are they not getting? 

[00:02:16] George Swetlitz: Yeah, I think what people are, you know, people know what they know and they don't know what they don't know. And so as they get exposed to things and as they get exposed to ai, they learn about what works and what doesn't work. And what's happened in this space around customer experience is that people, leaders in organizations don't really.

[00:02:33] George Swetlitz: Haven't really, they don't really understand what's the best way to leverage AI in the customer experience, customer sentiment space. And they're focused on the kinds of things that they see AI do, which is sounding human as opposed to what I counsel people, which is about being contextually relevant.

[00:02:52] George Swetlitz: Using AI to be relevant to your customers as opposed to sounding human. 

[00:02:57] Seth Earley: That's an interesting distinction, and obviously sounding human is important, but it has to be relevant, right? It doesn't matter if you're sounding human and, uh, and you, and you're, you're, you're not addressing my problem or you're not, uh, giving me any insights, but sounding human is fine.

[00:03:12] Seth Earley: But give me something that's relevant and meaningful. Tell me about how you look at this marketplace and how you look at customer experience and gimme a little bit of a, a sense of the kinds of things that you do, uh, in this environment, how you're leveraging AI and how you're impacting the customer experience.

[00:03:28] Seth Earley: A little bit of a, uh, company commercial, but not, not too, not too sales. 

[00:03:31] George Swetlitz: To build on what you just said though, you know, it's the flip side of sounding human mm-hmm. Is, you know, when you call a customer service agent and they read you back the script, well, they're human, but what you say is, well, they're robotic, they're robots, they can't help me.

[00:03:46] George Swetlitz: And so what you're trying to do with AI is get the best of both worlds. You're trying to be relevant to somebody in. In the space or in the place that they're in. Right. Kind of. That's how I view the world. And so, you know, what we do with AI is, is, is to do the things that large organizations need to be relevant to their customers.

[00:04:08] George Swetlitz: So what are large organizations looking for? They're looking for efficiency. Right. You know, most large organizations have centralized services and people struggle with centralization versus decentralization because the, the loss of quality when you bring things to the center and they also want revenue.

[00:04:27] George Swetlitz: And so we, they want revenue, right? They wanna build revenue, very 

[00:04:30] Seth Earley: important stuff. Money is hand. 

[00:04:31] George Swetlitz: Very important stuff. And so we try to help do those three things by leveraging the review ecosystem. And so, and I can talk more about how we do that, but that's essentially what we're, what we're doing for our larger organizations.

[00:04:46] Seth Earley: And so what is, what is different? Like, because people have been doing sentiment analysis for a very long time. Uh, they've used text analytics, they've, you know, used, uh, different mechanisms to kind of harvest, uh, you know, data, social media data and listen to customers in one way, shape or form. And, you know, lots of different ways of getting that voice of the customer.

[00:05:06] Seth Earley: So how is it different? What is different now? 

[00:05:08] George Swetlitz: Right. Okay. So focusing in on sentiment analysis, AI in general has upped the game with sentiment analysis by allowing the phrasing to be looked at in context. So you know, when you have sarcasm or when you have various things like that, AI does a much better job of understanding that in a review.

[00:05:28] George Swetlitz: For example, if somebody said, well, I went to. I went to this one restaurant and I was unhappy, and then I came here and was happy. Typical natural language processing doesn't do a great job of saying, well, that first part is really not about the company. Mm-hmm. That I'm evaluating. AI does a really good job of that.

[00:05:45] George Swetlitz: So when we use AI to look at sentiments, we pull in the review, we break it down into phrases, we determine, and this is through a series of agentic steps. We break it down into phrases, we determine which of those phrases have to do with the customer's experience with the business itself, and then we map it to a set of topics that the customer uses as their KPIs.

[00:06:08] George Swetlitz: So it's a much more granular approach, and not to say that other people couldn't do that, but we do it as part of a larger review ecosystem, and so that we then leverage those findings in other places in our process. So we don't just use it in sentiment analysis, but we use it when we're generating a response.

[00:06:27] George Swetlitz: We use it when we're doing a monthly summary. We, we use it in a lot of different ways. 

[00:06:31] Seth Earley: And one of the things that, um, we talked about, uh, is that. A lot of, a lot of times, uh, if we're asking AI or generative ai, you know, to answer a question, it's answering that question based on its model of the world. Its understanding of, you know, uh, interactions and relationships and, you know, all the data that it's been programmed on.

[00:06:54] Seth Earley: And it's very, very impressive and it's very important, but it doesn't necessarily know the details behind what you sell or your value proposition or your challenges and so on, or what the answers are. Right? So it can't necessarily contextualize a very specific response. So how do you do this? And we talked a little bit about retrieval-augmented generation, and you were saying you didn't necessarily use RAG.

[00:07:16] Seth Earley: But you did use some mechanism for building repositories of facts and data and information about the organization. So you wanna talk a little bit about how you contextualize the AI in company specific information? 

[00:07:29] George Swetlitz: Right. So if you, if your objective is to sound human. What you want is the best, most human-like sounding model.

[00:07:37] George Swetlitz: And as you said, what you see today in review responses that people, that that consumers are reacting negatively to is, is essentially just the repetition of the review and the response. And in a sense, what we're hearing from consumers now is. This is really irritating 'cause you're just taking up space on the page.

[00:07:58] George Swetlitz: You're adding no value to their understanding of the business. You are just taking up space. And so what we do is. We, so when somebody onboards with us, we do a series of things automatically. We read their entire history of reviews and we determine what people talk about. What are the things that people talk about that are unique to this business?

[00:08:22] George Swetlitz: And we kind of set those aside. Then we read the responses to those reviews and we read the website. We determine, are there things in the responses on the website that would be helpful and useful to a customer to know if they were reading that review and listening for a response, and we create this fact, we call 'em facts.

[00:08:45] George Swetlitz: We create this library of facts that's a very rich view of that company's kind of environment. So. In many cases, the company doesn't have that information. There's some things on the website, there's some things in the responses, but most people today don't respond with any kind of depth to reviews, and so we craft a response that we think might be relevant now, then what the company has to do is go in and review those facts and make sure that they're actually valid.

[00:09:16] George Swetlitz: But we did the hard work of determining and kind of, mm-hmm, creating a draft. From, from that point forward, every review that comes in gets analyzed in the context of those facts, and there may be one fact, three facts, zero facts, any number of facts that are relevant to that review. And when the response is generated, those facts are incorporated into the response itself.

[00:09:40] George Swetlitz: So they, so they become, rather than a recitation of the review, they become contextually relevant. 

[00:09:46] Seth Earley: It's a more meaningful answer or response to a question that someone has, and you're not just repeating back their question and then some, uh, potentially less relevant response. You're really trying to fine tune that response to this.

[00:10:00] Seth Earley: So you're really doing a personalized presentation of information that's contextually relevant to them. And the particular question and the particular problem. So, so you're able to, now you're able to, and of course a good customer service rep would do that, right? But, uh, give us some examples of kind of the scale or the volume at which organizations are facing this.

[00:10:20] Seth Earley: You know, you have smaller businesses that are, that are, you know, regional or local, and they have their challenges. You could have businesses with multiple locations. You could have large businesses that deal with a huge amount of responses. Give us some sense of, um, what that landscape looks like in terms of the number of responses that organizations are dealing with, how they dealt with it before this type of approach.

[00:10:43] Seth Earley: And what's some of the other attempts that automation have had, kind of where they've kind of had gaps or did not necessarily meet the expectations of the customer or the organization. So give us the landscape that will

[00:10:55] George Swetlitz: Yeah, so let's focus on large organizations. So large organizations. The problem with large organizations, I used to run a large organization.

[00:11:01] George Swetlitz: The problem with large organizations is the bigger you are, the less money you actually wanna. You know, as a CEO, you're sitting there going, I don't want 10 people sitting in a room responding to reviews. It's not value add and they're a hundred percent right. And so the problem is that when you, when you, when you have a large organization, because whether it's a template response or generic AI, it isn't helpful, what you have is people having to

[00:11:25] George Swetlitz: look at each of those responses, modify them, and that's time that's not being spent helping customers who actually had a negative experience and need handholding. So what we find is that larger organizations that we work with end up redeploying about two thirds of their spend. So they're able to take people that work.

[00:11:48] George Swetlitz: So they might have a thousand reviews a week and they're spending their time responding to those thousand and so they have three FTEs and they may be spending, you know, a half of an FTE focused on working with customers who have had a negative experience. What we find is that companies are able to redeploy so that they're spending two and a half FTEs working with the customers that need help and only a half of an FTE in that case, kind of, you know, working on reviews or response for a negative client.

[00:12:23] George Swetlitz: Mm-hmm. Because very, you know, often the negative reviews require some knowledge about the business. No AI is gonna be able to deal with those. You don't want AI to deal with those. And so someone has to focus on that, write the response, figure out who that customer is, call them up and try to solve the problem.

[00:12:40] George Swetlitz: And if you can have two and a half out of three versus a half out of three working on that, you end up with a much better customer experience. And that's where people really find the benefit. 

[00:12:50] Seth Earley: So one, one could argue that the purpose of grounding this in facts and your facts about the organization and using this type of approach to really understand what that customer interaction is, what problem they're trying to solve, would be in the realm of

[00:13:04] Seth Earley: the AI, but you're saying that there's a lot of edge cases that really defy that automation and that AI approach, because they do have a lot of complexity to them. There are multiple factors. You really do need a human being involved in that. Right. And what you're suggesting, you're saying, is that

[00:13:21] Seth Earley: we're gonna take away the things that can be automated, the lower hanging fruit, and we're going to focus on that higher value interactions and those more complex problems and more of those edge conditions. So can you talk about, uh, kind of an anecdote of, what a com, what an organization, and again, I'm not trying to make this about your company, but I'm trying to make it about what's a tangible kind of business case or where are people seeing value.

[00:13:44] Seth Earley: So can you talk about kind of a, a before and after, uh, with a large enterprise, what was this, the volume they were receiving, what were the challenges? What were they spending? And then, you know, after they were able to successfully automate this, build their fact base, which sounds like a knowledge base to me.

[00:14:00] Seth Earley: And it does sound like a retrieval, but that's okay. We won't have to get into too many, uh, uh, semantics, but tell me a little bit about what that before and after looked like. How, how was that impacting the business? 

[00:14:11] George Swetlitz: Yeah, so what I just talked about is kind of the outline of that: a thousand reviews a week.

[00:14:15] George Swetlitz: Right? Mm-hmm. That was the scale of it, with essentially three people focused on this full time. Okay. Focused mostly on dealing with the reviews and the responses as opposed to helping customers. In that particular case, they were able to redeploy from half a person to two and a half people focused on dealing with customers,

[00:14:35] George Swetlitz: solving customer problems, which led to customers changing their ratings, right? Mm-hmm. So one of the benefits is if you have a negative review and you solve the customer's problem, they go in and they change their rating. Mm-hmm. So that now improves your overall rating, as well as giving you a happy customer and customer retention.

[00:14:54] Seth Earley: So you improved your CSAT scores. And did you actually reduce the costs or the expense, or did you simply redeploy people to higher value work?

[00:15:05] George Swetlitz: Right. So there was a small cost reduction. In most of the large organizations that we deal with, there's so few resources in this area anyway.

[00:15:14] George Swetlitz: They're happy to redeploy them on actual customer issues as opposed to capturing a cost savings. But the flip side to that is the revenue growth. And so when you make responses contextually relevant, right? So let me step back. No one's closer to the bottom of the funnel than someone who's reading reviews.

[00:15:35] George Swetlitz: They are making a decision. They're reading and they're trying to decide what to do. And I know your listeners will have experienced this themselves. When they read reviews, you read down the page, you're looking at the reviews, you're glancing at the response from the company.

[00:15:51] George Swetlitz: And if you see something in that response that's relevant, you actually stop. You're not expecting it. You stop and you read it and you go, that's interesting. And so what happens is you start to think, this company actually cares. They're providing something other than garbage in this response; I'm gonna pay more attention.

[00:16:10] George Swetlitz: So they start reading down the page and they see it, and now you have a higher likelihood of being selected by that customer. So we've seen revenue impact. Now, it's always hard to see directly, but you can see it in terms of click-through rates and click-to-calls and things. When those percentages go up, it's essentially confirming that what you used in your response is attracting more people.

[00:16:35] George Swetlitz: So there are things that you can look at as a proxy for revenue growth.

[00:16:40] Seth Earley: And any data on that that you can share, in terms of an increase in the click-throughs? Or even just talk about the framework. What are you baselining, and then what are you trying to measure against for these proxies?

[00:16:55] George Swetlitz: Yeah. So the baseline is their click-through rate before they used us, because, generally speaking, not a lot of people are trying to be contextually relevant in their responses. And what we see is increases of around 15 to 20% in the click-through rate. And so that's pretty substantial,

[00:17:12] George Swetlitz: and it reflects the engagement with that review.
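The baseline-vs-after measurement George describes can be sketched numerically. An illustrative Python sketch; the click and impression counts are invented to land inside the reported 15 to 20% range, not real data:

```python
# Sketch of the measurement framework: compare click-through rate
# before and after responses became contextually relevant.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate as a fraction."""
    return clicks / impressions

baseline = ctr(clicks=300, impressions=10_000)   # 0.03, before
after    = ctr(clicks=354, impressions=10_000)   # 0.0354, after

# Relative lift over the baseline is the number quoted in the interview.
lift = (after - baseline) / baseline
print(f"CTR lift: {lift:.0%}")   # 18%
```

The same framing applies to click-to-call rates: fix the pre-adoption rate as the baseline, then report the relative change.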

[00:17:16] Seth Earley: Hmm. Okay. And so are there ways to measure the quality of these interactions? Is there any kind of objective measure? And again, this is a difficult thing; you're trying to put a quantitative measure to a qualitative problem, right?

[00:17:33] Seth Earley: Are there ways to measure the quality of those responses? I guess the people, their actions, their click-throughs would do that. Are there other objective measures to look at? If you had a thousand reviews and comments for company A, and a thousand reviews and comments for company B, are there objective ways to measure the relevance?

[00:17:57] Seth Earley: Not looking at click-through rates, but just looking at the text. Have you tried using large language models to do that analysis, or to kind of have that baseline objective measure of quality of response?

[00:18:11] George Swetlitz: Yeah. So in a sense we get at that when we do that analysis I was talking about, when we read the thousand reviews and the thousand responses to see whether the existing responses had any information in them that would be relevant to the review.

[00:18:23] George Swetlitz: Mm-hmm. And more often than not, there's nothing. The models they're using are not identifying useful information to put in the responses.
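The audit George describes, reading each review/response pair and checking whether the response carries relevant information, can be sketched as a judging loop. This is an illustrative Python sketch, not the product's actual pipeline; `judge` is a hypothetical stand-in for a real LLM call:

```python
# Sketch of auditing existing responses: for each (review, response) pair,
# ask a judge whether the response contains information relevant to the
# review, then report the relevant fraction.
from typing import Callable

def audit_responses(pairs: list[tuple[str, str]],
                    judge: Callable[[str, str], bool]) -> float:
    """Fraction of responses judged relevant to their review."""
    if not pairs:
        return 0.0
    relevant = sum(1 for review, response in pairs if judge(review, response))
    return relevant / len(pairs)
```

In practice `judge` would wrap a prompt along the lines of "Does this response contain information relevant to this review? Answer yes or no."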

[00:18:36] Seth Earley: Talk a little bit about these fact repositories, these knowledge bases, or whatever you refer to them as, these repositories of facts.

[00:18:45] Seth Earley: So what you're looking for is: what are the things people are asking about in their reviews, or what are they complaining about? What are the problems, and then where are the solutions, where are the answers to that? If somebody says, well, I couldn't find any parking, and on the website there's a place that says, well, here's our overflow parking,

[00:19:01] Seth Earley: it's across the street. But they didn't see that, they didn't know that. Right. So in some ways you're surfacing information that may already be present somewhere. In other cases, you're backfilling that information, right? You say, oh wow, here's a real knowledge gap or information gap. People don't know this piece of information about our organization.

[00:19:19] Seth Earley: Now we're going to capture that, codify it, and make it readily available to respond with, but also to feed back into other touchpoints. So how large are those fact bases? What do they consist of? Can you give us a sense of what it takes to create them, what they look like, and the number of facts that you would have in something like this?

[00:19:39] George Swetlitz: Yeah. I mean, a typical large organization will have somewhere between 20 and 30 facts. People tend to talk about the same things.

[00:19:45] Seth Earley: 20 and 30?

[00:19:47] George Swetlitz: 20 to 30 facts.

[00:19:48] Seth Earley: Okay.

[00:19:48] George Swetlitz: Maybe 10 to 20.

[00:19:50] Seth Earley: I was gonna say 20, 30,000.

[00:19:52] George Swetlitz: No, no, no, no, no. I mean, you know, when you think about reading reviews, people don't talk about 30,000 different things.

[00:19:57] George Swetlitz: They tend to talk about, most of the reviews tend to talk about, you know, 20 to 30 different topics, whether it be in a restaurant or an automobile business. And that's an interesting point; we can switch in a second to review requests and

[00:20:17] George Swetlitz: where AI can be used in review requests as well. But that's what you see, you tend to see these topics. You also tend to see, unlike in review responses, content for those topics on websites. People spend a lot of money on their websites. They spend a lot of money building those out and making them useful.

[00:20:38] George Swetlitz: What's happening, and it's even more so with AI overviews and things like that, is that fewer and fewer people are going to websites, so you have to be where they are, right? And people are reading reviews. And even today, AI overviews are reading reviews and responses. And so you have to be where the world's going, and the world is not necessarily going to the website.

[00:21:00] George Swetlitz: And so that's why we analyze the website, for that reason. We're trying to make the point, we're trying to do that light bulb thing where you say, you have all of this tremendously relevant stuff on the website, and none of it's making it into this review response. Why not?

[00:21:17] George Swetlitz: And the reason is, if you're a large organization, you have a small team of people responding to reviews. Did you really expect them to incorporate all this website content into the review response? I can tell you as a former CEO of a business, I would never expect that. It's just too difficult, with turnover and all these kinds of things that happen in centralized services. And AI is really good at it.

[00:21:41] Seth Earley: You had mentioned review requests a minute ago. So talk about surveys, AI, and review requests.

[00:21:47] George Swetlitz: Right. So the problem with review requests, it's the other side of the coin. Lots of organizations send out review requests. Everybody gets them. Everybody gets a lot of them,

[00:21:58] George Swetlitz: anytime you engage with someone. And it's the same problem, right? The problem is that those requests don't connect with you as a person. They're very robotic requests. And so you can do a couple of things. One is that you can leverage information that you have in your company's databases about that person and bring that information to bear in that review request.

[00:22:24] George Swetlitz: Make it more personal, make it more emotional. The request-to-review conversion rate goes up pretty dramatically, 30, 40%. And it can be because of the personalized text, it can be the inclusion of a photograph. Mm-hmm. It can be the inclusion, and this is a different point, of SEO-related questions.

[00:22:46] Seth Earley: Mm-hmm. 

[00:22:46] George Swetlitz: So based on what you know, what's a question that that customer could answer

[00:22:53] Seth Earley: Mm-hmm.

[00:22:54] George Swetlitz: to write a better review, for the benefit of all the review readers?

[00:22:58] Seth Earley: Can you give us an example? What would a generic request be, and then what would one of these enhanced requests be?

[00:23:07] George Swetlitz: Absolutely. So take a personal injury law firm, a great example. They know a lot about their customers. They know what type of accident that person came to them about. Mm-hmm. They know whether they got a settlement or whether they won a court case. They know when they did this work for the customer. So a typical review request is gonna say, Hey Bob, you know, it was our honor to work with you.

[00:23:29] George Swetlitz: It would be very helpful for us if you left a review. With that information, instead, the request can be: Hey Bob, it was an honor working with you on your motorcycle accident case, and it was wonderful that we were able to get the negotiated settlement that we did. It's been two months since we worked together, and I thought I would ask for a review; that helps us get the word out and would be very meaningful to a lot of other people. That connects with you in a way that a generic request wouldn't.
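The generic-versus-personalized contrast George gives maps naturally onto a templating step over CRM fields. An illustrative Python sketch; the field names (`case_type`, `outcome`, `months_since`) and the wording are invented for the example, not the actual product:

```python
# Sketch of building a review request from CRM fields, falling back to a
# generic message when the personalization fields aren't available.

def review_request(record: dict) -> str:
    personal_fields = ("case_type", "outcome", "months_since")
    if all(k in record for k in personal_fields):
        return (f"Hey {record['first_name']}, it was an honor working with "
                f"you on your {record['case_type']} case, and it was "
                f"wonderful that we were able to get the {record['outcome']} "
                f"that we did. It's been {record['months_since']} months "
                "since we worked together, and a review would help us get "
                "the word out.")
    # Generic fallback: no case details available (or not approved).
    return (f"Hey {record['first_name']}, it was our honor to work with "
            "you. It would be very helpful to us if you left a review.")

print(review_request({"first_name": "Bob",
                      "case_type": "motorcycle accident",
                      "outcome": "negotiated settlement",
                      "months_since": 2}))
```

The same pattern generalizes to any vertical: swap the fields for product purchased, lease signed, event attended, and so on.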

[00:24:01] Seth Earley: And so how is that automated at scale? Because I'd imagine that there could be some borderline conditions where you don't want to use too much personal information, right? It's that creepy factor, where you're being too personal versus not personal at all. And so your systems would have to interact with the customer relationship management systems, which would have to gather some of that customer intelligence and that relationship knowledge in order to do that.

[00:24:33] Seth Earley: So how is that actually done, and what are the guardrails around something like that?

[00:24:38] George Swetlitz: Yeah, so generally there's a human in the loop in those kinds of things. It's typically, you know, a paralegal that's working on cases and doing those kinds of administrative tasks. In their database, they've decided that these three or four fields are okay.

[00:24:58] George Swetlitz: So they've made the decision that those fields are okay, and then the paralegal, they know that interaction with the client. They know whether it's relevant, whether that's something that they wanna actually ask that client. And so when they're closing out the case, they'll mark that as a yes or a no, and that request either

[00:25:13] George Swetlitz: will or will not go out.

[00:25:14] Seth Earley: Yeah. Even if you're simply beginning with that, you're eliminating a lot of manual steps from the process when you're giving someone something to react to, rather than saying, write this from scratch. Right. It's like a template on steroids, right?

[00:25:29] Seth Earley: You're saying, here's what the structure is, here are the facts, and I'm gonna be pulling from our other repositories. Now a reviewer just gives this an okay, or gives it some edits. And so how would that translate to a very large organization? I mean, this is not my area of expertise, so I'm not familiar with how often

[00:25:51] Seth Earley: larger enterprises are asking for reviews. Is that more of a small to mid-size business dynamic, or are you seeing that with enterprise customers as well?

[00:26:00] George Swetlitz: We see it with enterprise customers as well. They tend to stay a little bit further away from things that could be at the boundaries. But again, if it's a large organization, they know what product was purchased, they know whether the customer's been a loyal customer, things that are not sensitive at all.

[00:26:16] George Swetlitz: Mm-hmm. Sure. That can be incorporated in a more personalized message. And in addition, in those cases, they may lean more heavily on the questions that are asked that help the customer figure out what to write. So that notion of connecting the questions with the customer allows that review to have more

[00:26:41] George Swetlitz: resonance. Mm-hmm. To readers, but also to AI. So if AI's trying to understand what you do, and, let's go back to the personal injury lawyer, reviews are talking about motorcycle accidents and slips and falls, then now AI knows this firm does motorcycle accidents and slips and falls. And somebody types in, I need an attorney for a slip and fall.

[00:27:00] George Swetlitz: Right. They now know that. 

[00:27:02] Seth Earley: Do you have other examples? Because I can see how that would work for that kind of law firm, which is fairly specialized. Any other examples for other types of products and services that a large enterprise might use?

[00:27:17] George Swetlitz: Yeah. We have, you know, property managers, large property managers.

[00:27:21] George Swetlitz: Mm-hmm. They're signing leases and hosting events, and all of these things can lead to review requests that hit an emotional trigger with that prospective tenant, or that tenant, around the emotional decision to rent in a particular facility.

[00:27:39] Seth Earley: Okay, gotcha. Now, in our preparation discussion, you had also mentioned

[00:27:45] Seth Earley: using this approach for competitive analysis. Do you wanna talk about how that dynamic works and what the approaches are for understanding the competition? One of the things we had talked about is that there are always going to be regional differences. So let's talk about the competitive landscape and how this kind of approach can be used to understand more about the competition, in ways to be more competitive.

[00:28:15] George Swetlitz: Right. So as you just said, every location of a location-based business operates in its own unique competitive environment.

[00:28:22] Seth Earley: Mm-hmm. 

[00:28:23] George Swetlitz: Right. And so what we find, what we see our larger organizations doing, is focusing in on the 20% of their businesses or locations that they're having a problem with.

[00:28:34] George Swetlitz: Because there's some work involved in interpreting results, right? So you can't do this kind of thing across every business. You wanna focus in on where you actually need it, or where you can get the greatest bang for the buck. So they identify,

[00:28:48] Seth Earley: Those are the problem children.

[00:28:49] Seth Earley: Those are usually the problem children.

[00:28:51] George Swetlitz: Everybody has them, everybody has them. And so that's what you wanna focus in on. And so you're trying to understand, in that case, what is it about this location that's causing a problem? Is it us, or is it something else? Is it a competitor that's eating our lunch?

[00:29:07] George Swetlitz: And so these are things you don't know. So you can have a sense, by looking at the sentiment analysis of your locations, as to whether or not those locations are underperforming the average on your KPIs. That's one clue that you have just from analyzing your own locations. Then what we have is the ability to go in and identify your competitors in that area.

[00:29:27] Seth Earley: Mm-hmm. 

[00:29:28] George Swetlitz: And, and, and start a competitive analysis process where we go in and we download all the reviews. 

[00:29:34] Seth Earley: Mm-hmm. 

[00:29:35] George Swetlitz: And we analyze those reviews in the same way that we analyze the company's own business.

[00:29:43] Seth Earley: Sure. 

[00:29:43] George Swetlitz: So now we know a couple of things. We know how many reviews are they getting a month? 

[00:29:48] Seth Earley: Mm-hmm. 

[00:29:48] George Swetlitz: When you go to Google and it says, this company has a thousand reviews and an average rating of 4.8, well, are those thousand reviews over the last year or the last 10 years?

[00:29:58] George Swetlitz: Nobody knows. So we go in and we calculate that. Why does that matter? Well, if they're getting 20 reviews a month and you are getting five reviews a month, you're gonna have a really hard time ranking in Google on a search. They're gonna beat you every time. So if you're spending advertising dollars and you're bringing people in, and then they go to Google and see somebody else who's doing better,

[00:30:23] George Swetlitz: you are just paying for somebody else to get the business. So you gotta understand: is it my review cadence that's the problem?
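The cadence calculation George describes, turning a headline count like "1,000 reviews, 4.8 average" into reviews per month, can be sketched from the review timestamps. An illustrative Python sketch, not the product's actual implementation:

```python
# Sketch of the review-cadence calculation: a total review count hides
# whether those reviews arrived over one year or ten, so compute a
# per-month rate from the review dates themselves.
from datetime import date

AVG_DAYS_PER_MONTH = 30.44

def reviews_per_month(review_dates: list[date]) -> float:
    if not review_dates:
        return 0.0
    span_days = (max(review_dates) - min(review_dates)).days or 1
    return len(review_dates) / (span_days / AVG_DAYS_PER_MONTH)
```

With that number in hand, the comparison in the interview is direct: if a competitor's dates work out to 20 per month and yours to five, the cadence gap, not the content, may be what's costing you the ranking.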

[00:30:31] Seth Earley: Yeah. 

[00:30:31] George Swetlitz: Or you can then go look at the sentiment analysis, and you can look at your KPIs, and you can say, are they outperforming me on service? Are they outperforming me on product quality?

[00:30:43] George Swetlitz: Where are they outperforming? Where am I outperforming them? Where's my problem? And in some cases it might be the review cadence. In other cases it might just be how you're delivering your products and services versus a competitor in your marketplace.

[00:31:00] Seth Earley: Mm-hmm. 

[00:31:01] George Swetlitz: But if you don't have that information, all you're doing is yelling at the manager:

[00:31:06] George Swetlitz: Yeah. Do better. Why are you so terrible? What are you doing about it? As opposed to saying, look, there's two things you need to do. You're processing 40 customers a month and you're getting five reviews; you need to get 15 reviews a month, so set that as a target. And you're losing against your competitors on the perception of service value,

[00:31:28] George Swetlitz: so we need to talk about how our successful businesses are positioning that, so you can do a better job.

[00:31:33] Seth Earley: Yeah, go ahead. 

[00:31:34] George Swetlitz: Yeah, no, no. So that's, you know, as a CEO, that's what I used to struggle with.

[00:31:40] Seth Earley: Sure. 

[00:31:40] George Swetlitz: How do I get the right information at the right time

[00:31:44] Seth Earley: Yep. 

[00:31:45] George Swetlitz: To the people running that business mm-hmm.

[00:31:47] George Swetlitz: Who care. Mm-hmm. They care about the business. 

[00:31:50] Seth Earley: Sure. 

[00:31:50] George Swetlitz: But they're busy working with customers every day. They don't have time to go figure this out.

[00:31:56] Seth Earley: Sure. 

[00:31:57] George Swetlitz: And so if we can figure it out for them, and AI provides this amazing ability to do that, we're now providing a service, a valuable service to them.

[00:32:07] Seth Earley: So I think the takeaway is that you have businesses that deal with customers on a local level, right? I mean, there's lots of businesses that deal with the whole landscape of customers and deal with them more homogeneously, but many times the service delivery is on the ground.

[00:32:30] Seth Earley: It's in a local area. You're dealing with people you know on a day-to-day basis, you're interacting with them. And that experience is so important to understand at a very granular level, and to be able to quantify, and to identify the topics and the issues that they are concerned with, that they're having problems with.

[00:32:48] Seth Earley: You're comparing that with the things that you're communicating to customers, vis-à-vis the website or email communications or onboarding processes or whatever it might be. You have this whole landscape of information that you're giving them throughout their life cycle.

[00:33:06] Seth Earley: And then what you're doing is you're saying, well, let's really understand the nuances of the problems and perceptions of those customers at a hyper-local level. And then let's also look at the competitive landscape and say, well, how does this compare? Are other customers dealing with the same problems with other vendors,

[00:33:28] Seth Earley: or other suppliers? Are competitors solving these problems more effectively? Are they addressing things that we're not addressing? Right? So you're really getting that understanding of what the gaps are, and the differentiators, at that hyper-local level.

[00:33:44] Seth Earley: So that's interesting in terms of understanding that competitive landscape, understanding how you're being perceived in the market, and then doing that across hundreds or thousands of local instantiations, right, localizations of your customer interactions.

[00:34:01] George Swetlitz: Right. That's exactly right.

[00:34:03] George Swetlitz: And it's all about, you know, we were talking about location-based businesses, and most of our business is location-based business. Sure. But we have, for example, a ferry booking service in Europe, one of the largest in Europe. Mm-hmm. And so it's Google, Trustpilot, the App Store; it's a much more vertical as opposed to horizontal business.

[00:34:27] George Swetlitz: Thousands of reviews a month, but it's essentially the same problem. They have issues with the app, they have issues with the website, they have issues with these different things. And how do you break it down, understand it, get the right information back to the right people at the right time, and take care of your customers, right?

[00:34:45] George Swetlitz: What they found, using the system, so they have like 25 facts. They spend a lot of time developing their facts. Great facts, very rich. So if somebody complains about something about the app, they have this very sophisticated, very informative response that gets incorporated. Customers love it, right?

[00:35:04] George Swetlitz: It really helps change the impression of what's going on. Because the problem is that if somebody says something wrong in a review and you don't rebut it, all the review readers think that's the truth. You have to engage in that ecosystem, and unfortunately, most companies just don't engage.

[00:35:24] Seth Earley: Yeah. Yeah, they don't. 

[00:35:26] George Swetlitz: Primarily because they don't know they can't. 

[00:35:28] Seth Earley: Okay. Yeah. So, or there are bandwidth issues and logistic issues and tactical issues and so on. Or you might say it's a dot on their day, right? It's not that important. But I imagine that all this market intelligence and voice of the customer obviously always goes back upstream to try to fix whatever problem you have, right?

[00:35:50] Seth Earley: So is it a lack of onboarding? Is it a lack of the right help? Does a screen need to change, or a customer service response, whatever it might be? But you're going back upstream to the source of the problem and doing that remediation, so that you don't get those kinds of responses in the future.

[00:36:07] Seth Earley: So that's highly valuable. And then I do like the idea of understanding the gaps in the competitive landscape, to say, are we missing, you know, the information, I was gonna say missing the boat, but I didn't want to do that 'cause of your ferry example. Where are we missing information? Where are we failing the customer?

[00:36:24] Seth Earley: Where are competitors outperforming us in these different areas? So very, very valuable. And I can see how it's kind of an evolution of typical sentiment analysis, because you're going a step further and you're trying to understand more of these factors in a broader way.

[00:36:41] Seth Earley: Let's just talk about where you see the future of this going, and what your thoughts are about what organizations need to do to get the best use from AI in their customer experience. So where's it going, and what do companies need to think about?

[00:36:56] George Swetlitz: Yeah, I think, I think it's just gonna get better.

[00:36:58] Seth Earley: Not a fad, it's not just gonna go away.

[00:37:00] George Swetlitz: What I've seen, my experience with AI has been that, from where we were at the very beginning when we started this, we have many more agents now than we had before, because we find that quality is higher the smaller the ask. Right? If you ask AI to do 50 things, well, I'll just give you a simple example.

[00:37:21] George Swetlitz: If there are 25 facts, and we would feed all 25 facts in a prompt and say, which of these are relevant, it would always get the first one right, and it would always get the last one wrong. It just gets lazy and tired. But if you ask 25 different times, you get the right answer each time. And it's a tiny bit more expensive, but not that much.

[00:37:39] George Swetlitz: And so, you know, AI will get better at handling these kinds of things over time, and it'll make the system more robust. That's one area where I see AI changing. I think the other thing that will make this easier is when we start onboarding people with AI. Getting these things set up, you know, we have a team that onboards people, because it's one thing just to go in and give a generic AI response.

[00:38:08] George Swetlitz: It's like anybody can do that, right? It's easy. But when you make the system more robust, it's harder to set up. And so with our larger enterprise customers, they'll have somebody, and we have somebody, and we help them. But I think longer term, as AI gets even better, we're gonna be able to do kind of AI onboarding, and that will help people migrate to these systems much, much more easily.
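The prompt-splitting pattern George describes, one small relevance query per fact rather than one large prompt listing all 25, can be sketched as a loop. An illustrative Python sketch; `ask_llm` is a hypothetical stand-in for a real model call, and the prompt wording is invented:

```python
# Sketch of per-fact prompting: ask one small yes/no question per fact
# instead of one big "which of these 25 are relevant?" prompt, avoiding
# the position bias where items late in a long list get dropped.
from typing import Callable

def relevant_facts(review: str, facts: list[str],
                   ask_llm: Callable[[str], str]) -> list[str]:
    """Select facts the model judges relevant, one query per fact."""
    selected = []
    for fact in facts:
        prompt = (f"Review: {review}\n"
                  f"Fact: {fact}\n"
                  "Is this fact relevant to the review? Answer yes or no.")
        if ask_llm(prompt).strip().lower().startswith("yes"):
            selected.append(fact)
    return selected
```

The trade-off is exactly the one stated above: roughly 25 small calls instead of one, a bit more expensive, but each answer is independent of the fact's position in a list.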

[00:38:32] Seth Earley: Okay, great. Well, listen, uh, George, thank you so much for your time today. I appreciate your thoughts. We'll have your information in the show notes, but where can people find you? 

[00:38:41] George Swetlitz: So, RightResponseAI

[00:38:43] Seth Earley: Okay.

[00:38:44] George Swetlitz: dot com is our website. We have a special page, rightresponseai.com/podcast/earleyai, where they can go, and we have a special coupon for your listeners.

[00:38:57] George Swetlitz: If they have any interest, they can book a call directly with me, and they can get a coupon code for free credits to help them learn more about the system.

[00:39:06] Seth Earley: Okay.

[00:39:07] Seth Earley: You are gonna measure our effectiveness. Great. Okay. 

[00:39:09] George Swetlitz: Exactly. 

[00:39:10] Seth Earley: I'm a big believer in quantitative measures, so that's great.

[00:39:15] Seth Earley: Again, thank you so much for your time, I appreciate it. And thanks to our listeners; we will see you next time, on the next Earley AI Podcast. Thank you for joining us and we'll see you next time.

[00:39:26] George Swetlitz: Thank you, Seth. 

[00:39:28] Announcer: Thank you for joining us on another deep dive into AI innovation. Tune in next time when we introduce another industry expert and discuss how to maximize AI in your world.

[00:39:39] Announcer: The Earley AI Podcast is sponsored by Earley Information Science and Vector. To learn more, visit Earley, that's E-A-R-L-E-Y. Thank you for tuning in.