AI Products with George Swetlitz

Table of contents

About The Podcast

AI Products with George Swetlitz

Published: January 23, 2026

Host Bio

Robert Patin is the Founder and Managing Partner of Creative Agency Success, where he works alongside creative agency owners as a fractional CFO and “agency scale partner” to build simpler, more profitable service businesses with clearer decision-making and stronger operational discipline. After starting out in commercial photography, he moved into finance and strategy, developing his skills in small CPA firms and later leading finance inside a creative agency, experience that shapes how he helps leaders connect delivery quality to capacity, consistency, and client outcomes. He’s also a two-time international best-selling author (The Agency Blueprint and The Practical Agency), and he hosts the Agency Blueprint Podcast, focused on helping agency owners grow sustainably while reducing stress and regaining control of their time.

Summary

Robert Patin frames this episode around a practical lens: AI should be used to improve work, not just automate it—and the way to do that is to choose the right problem, constrain it, and build guardrails so the output stays useful and trustworthy. Early on, Robert sets the expectation that they’ll explore “how you can think about AI in the right way,” and how to “identify the right problems” so the result is better—not merely faster.

George Swetlitz joins as an operator-turned-builder who lived the problem firsthand. Robert introduces George’s background (strategy work at Sara Lee, an MBA from Harvard, and leadership in a 220-location business) as context for why the conversation stays grounded in operational reality rather than theory.

Reviews are the conversion moment most teams underplay

A recurring theme is that reviews aren’t “brand theater”—they’re where real decisions happen. George describes review readers as being at the very end of the customer journey: “Nobody goes and reads reviews for fun… they are at the bottom of the funnel,” and he adds, “There’s no one closer to the bottom of the funnel than someone actually reading your reviews.”

That framing turns review responses into something closer to a conversion lever than a box-checking task. George argues that businesses have a choice: “You’re either gonna engage in that conversation. Or you’re gonna let two strangers talk to each other?” Robert affirms the premise with a simple “True,” but his later reactions show he’s internalizing the idea as an agency/operator opportunity.

The scale problem: consistency breaks across locations and teams

George explains the operational pain of doing this across hundreds of touchpoints. In a 220-location environment, you don’t have one person “owning reviews”—you have many managers, layers of regional oversight, and multiple communication channels. He describes trying to drive organic growth (not paying for ads) and realizing that even if the website and other marketing touchpoints were strong, prospects still “land on your reviews” after they “Google you” and start reading about you.

His point isn’t that reviews are the only thing that matters, but that they’re the moment of truth where the customer checks whether your promises hold up. When review responses are weak or inconsistent, it can undercut all the upstream effort that got the prospect there.

Why templates and generic AI fail in different ways

The episode draws a sharp distinction between:

  • Templates (predictable, repetitive, but at least clearly “templated”)
  • Generic AI (often worse, because it pretends to be human while adding no value)

George’s take is blunt: “Templates are better than generic AI,” because generic AI tends to “parrot back what’s in the review itself.” In his view, that creates responses that are “disingenuous,” and people “get turned off” because the reply doesn’t provide anything new, useful, or specific.

This is where Robert leans in. He’s interested in AI that produces utility, not AI that produces performance. That sets up the core mechanism George describes next.

The fact library: bringing the website into the response

George explains RightResponse AI’s central idea as “bring the website to the response.” He notes that teams spend “a ton of time and money building an amazing website,” full of messaging, promises, and brand value—but that “more people read reviews than visit websites.”

So instead of letting review responses become generic pleasantries, the system builds a fact library: the business’s real, approved knowledge translated into “if/then” logic. George describes it as essentially the website’s content turned into structured rules:

  • “If somebody talks about this, then let’s tell them about this element of our brand value.”
  • “If someone asks… why are we better,” you can include a specific proof point (he gives an example of a long-running local award).

When a review comes in, the system “assess[es] that review against all the facts,” chooses the relevant ones, and “build[s] those into the response.” Robert summarizes the logic back in his own words—he calls it “a logic framework”—and George confirms: “That’s exactly right.”
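The fact-library mechanism described above can be sketched in a few lines of code. This is a deliberately simplified illustration, not the actual RightResponse AI implementation: the real system presumably uses an LLM to assess relevance, whereas this sketch uses keyword triggers, and every name, fact, and trigger below is hypothetical.

```python
# Illustrative sketch of a "fact library": business facts stored as
# if/then rules, assessed against an incoming review, and woven into
# the reply so it adds new information rather than parroting the review.
from dataclasses import dataclass

@dataclass
class Fact:
    triggers: set[str]   # "if" — topics that make this fact relevant
    statement: str       # "then" — the approved business fact to include

# Hypothetical facts, modeled on the episode's examples (a local award,
# a practical detail future customers might need).
FACT_LIBRARY = [
    Fact({"parking", "park"}, "Free validated parking is available in the rear lot."),
    Fact({"best", "better", "why"}, "We've won the local Readers' Choice award 10 years running."),
]

def relevant_facts(review: str, library: list[Fact]) -> list[str]:
    """Assess the review against every fact; keep only the ones it triggers."""
    words = set(review.lower().split())
    return [f.statement for f in library if f.triggers & words]

def build_response(review: str) -> str:
    """Thank the reviewer, then add information useful to the silent
    reader who is deciding whether to buy."""
    facts = relevant_facts(review, FACT_LIBRARY)
    reply = "Thanks for taking the time to share this."
    if facts:
        reply += " " + " ".join(facts)
    return reply
```

The point of the structure is that the response's substance comes from the library (the website's content in if/then form), not from the review text itself.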

The key is that the response is written not only for the reviewer, but also for the silent reader evaluating the business. George returns to that buyer context: review readers are already deciding, so the reply should help them decide with real information.

Robert’s validation: review responses are creative work, not admin work

One of the most host-forward moments comes when George describes talking with agency owners who still treat review management “as a task.” George argues they haven’t connected that AI can let them be “creative in the review ecosystem,” adding real value at the decision point.

Robert picks up that thread and expands it into an agency-level critique: “You spend inordinate amount of money… to get the consumer to read a review and then if it falls flat, it was all for naught.” He’s not talking about clever copy—he’s talking about wasted acquisition effort if the “last yard” is generic.

He then reframes AI’s role: rather than replacing creativity, AI can help teams get to the best version of their thinking faster—almost like a “yes and” partner—so the output is the same or better, delivered with more consistency.

The AI-building framework: small problems, then stitch together

George’s product-building philosophy aligns with Robert’s “constraints” lens. George says, “The smaller you make the problem… the better off you are,” and he warns that “big things fail” because the model must decide what to focus on. His approach is to break the work into “many, many, many pieces” and then tie them together with a human-designed logic flow.

Robert explicitly validates this: “I will adamantly agree with you on that,” and he adds his own analogy (baking a cake) to reinforce the discipline of decomposing a big goal into solvable parts.
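The decomposition discipline George and Robert describe can be sketched as code: instead of one large "respond to this review" prompt, each step is a small, single-purpose unit, and a human-designed flow ties them together. In a real system each function might be its own narrowly scoped model call; here they are stubbed deterministically, and all names and categories are hypothetical.

```python
# A minimal sketch of "make the problem small, then stitch the pieces
# together with your own logic flow." Each function solves one small,
# constrained problem; respond() is the human-designed flow.

def classify_sentiment(review: str) -> str:
    """Small problem #1: is the review positive or negative?"""
    return "negative" if any(w in review.lower() for w in ("bad", "rude", "slow")) else "positive"

def extract_topic(review: str) -> str:
    """Small problem #2: what is the review about?"""
    for topic in ("parking", "staff", "pricing"):
        if topic in review.lower():
            return topic
    return "general"

def draft_reply(sentiment: str, topic: str) -> str:
    """Small problem #3: draft from structured inputs, not raw text."""
    opener = "We're sorry to hear this." if sentiment == "negative" else "Thanks for the kind words!"
    return f"{opener} (topic: {topic})"

def respond(review: str) -> str:
    # The logic flow the builder controls, rather than asking one
    # model to decide what to focus on.
    return draft_reply(classify_sentiment(review), extract_topic(review))
```

Because each piece is small and testable on its own, a failure in one step is easy to localize, which is the practical payoff of not throwing the whole problem at the model at once.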

Guardrails and the uncanny valley: don’t try to sound human

A major tension in the episode is that businesses want responses to feel authentic, but generic AI often feels fake. George introduces the “uncanny valley” concept as a useful frame: when something is almost human but not quite, people feel uneasy.

He applies that directly to review replies. A flat AI response that defers to humans (“We’ll investigate”) is clearly non-human and can feel acceptable. But overly emotional, “thrilled/delighted” language can feel strange because readers “can see through it.” George summarizes the goal: “We’re not trying to be human. We’re trying to make the business respond in a more human-like way,” while still being clearly non-human.

Robert reacts with his own version of the point: humans struggle to be “authentically human and emotional” sometimes too—so trying to force AI into that lane can be counterproductive.

Repeatability, boundaries, and where it’s still hard

On the operational side, George highlights a CEO-level standard for shipping customer-facing AI: “It really comes down to repeatability.” He notes AI is never 100% repeatable, so the team did extensive testing to keep the output within acceptable “boundaries.”
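One way to operationalize "repeatability within boundaries" is to run the generator repeatedly over a fixed review set and assert that every output stays inside hard guardrails. The specific checks below (banned uncanny-valley phrases, a length cap) are illustrative assumptions, not the test suite the episode describes.

```python
# Sketch of a boundary check for customer-facing AI output: AI is never
# 100% repeatable, so generate many times and fail on any excursion.

BANNED = ("thrilled", "delighted")  # overly emotional language readers see through
MAX_LEN = 350                       # hypothetical cap to keep replies scannable

def within_boundaries(reply: str) -> bool:
    lower = reply.lower()
    return len(reply) <= MAX_LEN and not any(b in lower for b in BANNED)

def repeatability_check(generate, reviews, runs=5) -> bool:
    """Generate repeatedly; a single out-of-bounds output fails the check."""
    return all(
        within_boundaries(generate(r))
        for r in reviews
        for _ in range(runs)
    )
```

A harness like this encodes the CEO-level standard as a gate: the tool ships only when repeated runs stay inside boundaries you have decided you are comfortable with.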

He also calls out a specific hard edge case: medical businesses and PHI. He explains that strict interpretations can make AI refuse to respond at all because responding can imply patient status, and he says they haven’t “perfectly crack[ed] that code yet.”

The episode closes with Robert emphasizing that this isn’t about cutting headcount; it’s about doing something teams “weren’t able to do before” at scale—especially at the moment prospects are deciding.

Q&A

  1. If prospects read reviews at the last minute before buying, what should my responses actually do?

Treat responses as part of the decision experience. Make them helpful for someone evaluating you right now, not just polite acknowledgments.

  2. Should I respond to reviews even when the reviewer is talking to someone else, not to me?

Engaging matters because the conversation effectively becomes between the reviewer and the next prospect. Responding lets you participate instead of being absent at the decision point.

  3. Why do generic AI review replies feel so off-putting?

Replies that simply restate what the reviewer already said can feel disingenuous. They add no new value, so readers interpret them as automated performance rather than real engagement.

  4. Are templates actually better than AI for review responses?

Templates can be better than generic AI when they’re clearly intentional and built for specific situations. Generic AI can be worse when it produces vague, repetitive replies that pretend to be human.

  5. What makes a review response useful for someone who’s deciding whether to buy?

A useful response adds relevant business information the reader doesn’t already have—details that help them evaluate what you do, how you operate, and why you’re a good choice.

  6. How can RightResponse AI avoid parroting the review back to the customer?

The approach described uses a fact library: business-specific facts written in an if/then structure, so the response can include the most relevant details instead of repeating the review.

  7. What is a fact library, practically speaking?

It’s the business’s real, approved knowledge translated into if/then logic—so if a review mentions a specific topic, the response can reference the most relevant facts about the business.

  8. I’ve invested a lot in my website messaging—how do I connect that to review responses?

Bring your core messaging, promises, and proof points into the response itself, so review replies reflect what you stand for rather than generic thank-yous.

  9. I run a multi-location business—why does review response quality break down so fast?

Scale adds operational complexity: more managers, more handoffs, and more inconsistency. That makes it difficult to execute the same standard of excellence everywhere.

  10. What’s the first step to using AI in a way that doesn’t fail?

Start by shrinking the problem. Break it into the smallest solvable units, solve those parts, then connect them with a logic flow you control.

  11. Why do big AI projects fail even when the model is powerful?

When you throw too much at a model, it has to decide what to focus on. The guidance here is that smaller, clearly constrained pieces perform more reliably.

  12. How do I make AI outputs feel authentic without trying to sound human?

Optimize for being helpful and connected, not for sounding emotional or human-like. Avoid exaggerated language that readers can tell isn’t genuine.

  13. What is the uncanny valley problem in AI-written review replies?

When a reply is almost human but not quite, it can make readers uncomfortable. Overly emotional, human-imitating language can trigger that reaction.

  14. Should I disclose that an AI wrote the review response?

Not directly addressed here as a best practice. The conversation notes that explicitly labeling responses as AI could be tested, but also that it may feel strange in practice.

  15. How do I decide when an AI tool is ready to use live with customers?

Repeatability is the standard described. You want confidence the system stays within boundaries you’re comfortable with, which requires extensive testing.

  16. What’s a real example of how “if/then” logic helps review responses?

If a reviewer mentions a specific friction point (like parking), the response can include a concrete, relevant detail that helps future customers navigate that issue.

  17. Why should agencies treat review management as creative work?

Because reviews sit at a high-leverage moment where a prospect is deciding. Shaping that moment with helpful, business-specific responses can add real value beyond “task completion.”

  18. What makes AI especially complicated for medical businesses responding to reviews?

PHI constraints can block responses because acknowledging someone as a patient can be considered sensitive. That requires stricter guardrails and careful handling.

Transcript

[00:00:00] George Swetlitz: The response is thoughtful, and you've built those responses to be helpful and useful for people who are not just, who are your prospective customers who are making that buying decision right now? I mean, nobody goes and reads reviews for fun when they come to your review, right? They are at the bottom of the funnel.

[00:00:18] George Swetlitz: There's no one closer to the bottom of the funnel than someone actually reading your reviews.

[00:00:23] Robert Patin: True.

[00:00:23] George Swetlitz: And you could, you have a choice to make. You're either gonna engage in that conversation. Or you're gonna let two strangers talk to each other?

[00:00:43] Robert Patin: Everyone. Today's guest is George Swetlitz, an operator turned builder who knows what it takes to solve problems at scale. George led strategy at Sara Lee and earned his MBA from Harvard, and then later ran a 220-location business, where he saw firsthand how hard it is to deliver consistent results across hundreds of touchpoints.

[00:01:01] Robert Patin: When AI tools like ChatGPT emerged, George didn't set out to replace people, he set out to solve one specific challenge better than humans could do it, truly at scale. By narrowing the problem, setting clear constraints, and embedding human context into the systems, he discovered how AI can deliver not just speed, but quality and authenticity.

[00:01:20] Robert Patin: So in the conversation today, we're gonna be exploring how you can think about AI in the right way, how to identify the right problems, constrain them to solving a problem, and build solutions that improve, not just automate work. Welcome to the show, George.

[00:01:34] George Swetlitz: Great to be here.

[00:01:35] Robert Patin: So let's, let's kinda start with your journey, if you will.

[00:01:39] Robert Patin: I, I know that you've had quite a unique career path, and so you've gone from Sarah Lee and your MBA and the business and then now, um, now RightResponse AI. So can you talk me through a little bit of like some of the learning factors that you've had throughout that, that course of time in the 200 location business as well, and kind of what you saw?

[00:01:58] George Swetlitz: Yeah, yeah. So. You know, working in a 220 location business was completely different than anything I had done before. Mm-hmm. And, and part of the challenge was trying to generate organic growth, not through paid ads, not through social, but trying to get people to convert when they come to your site.

[00:02:20] George Swetlitz: Convert when they come to your location. And, and that was really the gold standard because you, you're driving growth and you're not paying for that growth. And so we spent a lot of time thinking about how do you do that? And of course your website is critical. The quality of all of those things that you do from a marketing perspective are critical.

[00:02:42] George Swetlitz: Mm-hmm. But at a certain point, the customer lands on your reviews. They, they Google you. They search and they find, they find you and they read about you. And we were finding that we weren't good there. We were good everywhere else, but we weren't good there. And there were a lot of factors for that when you think about doing that at scale.

[00:03:07] George Swetlitz: But it applies to any, any business, large or small. And so we struggled with that. This was before AI, before ChatGPT, before all of these tools existed, and we struggled with the platforms that were in place, the way that we were organized. For example, you know, you have 220 locations with 220 managers and regional managers and call centers, and how do you execute on excellence on that side of the business so that all the work that you did driving people to your location.

[00:03:41] George Swetlitz: Wasn't for Naugh. And that was very, very difficult. And we ended up exiting the business in, in 21. And, uh, in 22 Chat, GPT came out. And I thought this would've been incredibly helpful for us at that time. This might've been the path to solving that. Mm-hmm. And so I, I brought a team together and we decided to tackle that, that problem.

[00:04:06] George Swetlitz: And that led to RightResponse AI. And it's been a lot of fun because. Starting a software business, starting a platform had always been something that I had thought about.

[00:04:16] Robert Patin: That's cool. And then when you were like, thinking about this problem that you had in the 220-location business, which I mean, frankly, the reality of the world today where people are wanting to make sure that they're making the right choice of like, you know, hey, considering the internet and everything that's going on, like there's a lot more kinda Machiavellian oriented people out there.

[00:04:33] Robert Patin: Like, how can I trust this person or this business, uh, with my money, really? And so those reviews are, you know, critical, the. Reality is, is that there's loads of different reputation management softwares and platforms out there. So what exactly were you seeing kind of missing in this space? That you were looking to solve differently?

[00:04:49] George Swetlitz: Right. So let, let's think about the history of review response, right? There's, there's one element of it, there's review requests. There's analysis, but there's review responses. So let's focus on that. So originally people would write review responses, they would go in and they would physically write them, and that's how everything got started.

[00:05:06] George Swetlitz: That's what you would do? Yeah. And of course that's great when you have a couple locations much tougher at scale.

[00:05:12] Robert Patin: Sure.

[00:05:12] George Swetlitz: Then platforms came out, and those platforms were really around templates. And at that time, and they're still, they still exist, some of the platforms still have templates, but of course they're very repetitive and.

[00:05:24] George Swetlitz: People now recognize that instantly I would, I would personally say that templates are better than generic AI because it's clear that they're templates and you can build templates to solve specific problems. But then chat GT comes out generic, AI comes out, and this is viewed as something better. But in fact, it's not better, in my view, is it's much worse because it is just essentially parroting back.

[00:05:50] George Swetlitz: What's in the review itself? There's nothing new. There's nothing useful. There's nothing helpful for anybody. And I find when you talk, when you do research and talk to people, they're, they get turned off by these generic AI responses that are pretending to be human, but are not. They're very disingenuous and, and so what we.

[00:06:12] George Swetlitz: Set the, the task as is, how do we use AI to essentially come putting my marketing hat on, bring the website to the response.

[00:06:23] Robert Patin: Hmm.

[00:06:23] George Swetlitz: And my old job, we spent a ton of time and money building an amazing website.

[00:06:28] Robert Patin: Hmm.

[00:06:28] George Swetlitz: With all sorts of messaging and promises and brand value. Mm-hmm. The reality is more people.

[00:06:35] George Swetlitz: Read reviews and visit websites. And so how do you bring all of the content that's in that website to the response itself? Mm-hmm. And what we do with RightResponse AI is we build what we call a fact library, and it's essentially the, the content of your website in an if then form. Mm. If somebody talks about this, then let's tell them about.

[00:07:01] George Swetlitz: This e element of our brand value. Mm. If someone asks the question about why are we better, let's tell them that we've won this award in the local community for 10 years straight.

[00:07:11] Robert Patin: Mm.

[00:07:11] George Swetlitz: And so we build this fact library, and then when a review comes in, we assess that review against all the facts in the fact library, determine which ones are relevant, and build those into the, the response.

[00:07:23] Robert Patin: Got it. So you're effectively giving AI a logic framework by which it's supposed to be able to function, so that you're actually able to have a bit more of what would be a genuine oriented response that would have a utility to the consumer of that review that allows for them to make a buying decision

[00:07:41] George Swetlitz: that's right.

[00:07:42] George Swetlitz: That's exactly right. So you're, you're responding to the person who wrote the review in a thoughtful way. Mm-hmm. The response is thoughtful, and you've built those responses to be helpful and useful for people who are not just who are your prospective customers who are. Making that buying decision right now.

[00:08:01] George Swetlitz: I mean, nobody goes and reads reviews for fun when they come to your review, right? They are at the bottom of the funnel. There's no one closer to the bottom of the funnel than someone actually reading your reviews.

[00:08:12] Robert Patin: True.

[00:08:12] George Swetlitz: And you could, you have a choice to make. You're either gonna engage in that conversation.

[00:08:16] George Swetlitz: Or you're gonna let two strangers talk to each other, right. The person, the, you know, the person who wrote the review and the person who's reading them are having this, you know, they're, they're engaging with each other and you are completely out of the

[00:08:28] Robert Patin: Yep.

[00:08:28] George Swetlitz: You know, the conversation.

[00:08:29] Robert Patin: Yeah. So, you know, in the kind of AI landscape, and I think that this is interesting in the, kind of the premise of the conversation today is you've taken this experience that you have.

[00:08:40] Robert Patin: Throughout your career that you've built up, you solved a problem for your 220 location business and like, how exactly do I go about this? And then now looked at, all right, we have this new technology that's emerging that we are able to do, but then you apply the actual implementation of that and realize it doesn't quite work exactly as well as if I were to have done this myself.

[00:08:59] Robert Patin: So how do I actually create that utility? So the, the thing that I wanted to kind of dive a little bit more into now is. The, how do you think about your experience in your career, AI as a utility and then a problem that the market ultimately has and kind of think about how exactly do I go about what is a good fit problem for my experience plus AI that then allows for the market to be able to have a problem solved that's needs to be solved.

[00:09:24] George Swetlitz: So I think, you know, part of it is, you know, when you're trying to use AI, I found that the smaller you make the problem, right, the better off you are. People try to do big things with AI, and I found, at least through our own experience, that big things fail.

[00:09:41] Robert Patin: Mm-hmm.

[00:09:42] George Swetlitz: Because essentially when you, when you throw enough into an AI model, it has to decide what it's going to work on.

[00:09:50] George Swetlitz: And these, you know, these. Models cost a lot of money, and they're, and the, uh, the companies are trying to economize and so they do things to kind of, to constrain their own costs, and part of that is how it solves problems. And so we found success by. Taking this problem and breaking it down into many, many, many pieces, and then genically tying them together.

[00:10:14] George Swetlitz: And so I think that when any, when anybody has a problem that they wanna solve with AI, the first thing that you wanna try to do is break it down into, into the smallest pieces that you can before solving, and then stringing them together in your own logic flow.

[00:10:29] Robert Patin: Hmm. I will adamantly agree with you on that.

[00:10:33] Robert Patin: I mean, I think you can still solve big problems with AI, but I think that you have to create all the smaller pieces. It's like the, the idea of like, I'm gonna go bake a cake. I have to realize that there's a bunch of different ingredients that I have to put into it to actually do so.

[00:10:45] Robert Patin: And being able to actually break it down into smaller, logical pieces, the same way that you would do yourself if you were solving the problem. Right. So let's say that I was, from a research perspective, I'm trying to determine what is the white space in, um, in the market for a specific brand. I'd wanna break it down into, like, what is the competitive set?

[00:11:03] Robert Patin: What is the, the ICP? What exactly is the current positioning? Like, what are all these little pieces that I'm taking in aggregate? So, like, go and collect all of those pieces of information and then start to do the analysis across it. So the exact same way that your mind is thinking through it is vital. If you don't actually take the logic points that you've built up over your career, which I think is the core of what I'm hoping that listeners can take out of this, is that.

[00:11:29] Robert Patin: You have built throughout the course of your career, personally and listeners as well. They've built a segment of expertise that they've built over the course of their career. And if you take the logic criteria that you've taken in the example of yours, that 220 location business, we have people at the bottom of funnel that are wanting to go to the location.

[00:11:49] Robert Patin: They're trying to gather information to be able to make a buying decision, and the more authentic of a response we give. The better of a buyer that we have crossing through to the location. And so you're taking the logic, same criteria, and then just transforming that into an automated AI oriented version.

[00:12:05] Robert Patin: But if you try to think about it of like, okay, I just need to respond to the review, you would've ended up with that kind of boilerplate oriented responses without all the logic, the step-by-step criteria that you were taking before. Right.

[00:12:18] George Swetlitz: Right. I definitely agree with your point. I think, you know, I think an interesting, when I talk to, so we have an agency portal where we, you know, we've structured, we've structured a, a way of operating for agencies and make it easy for agencies to use our platform.

[00:12:30] George Swetlitz: And when I talk to agency owners, normally what I talk, they approach me because there, there may be some feature that's missing in the platform that they're using, or they think we might be lower cost, or whatever it is, but they still view this area as a task.

[00:12:47] Robert Patin: Mm.

[00:12:48] George Swetlitz: Right. It's like a, it's a review management as a task that they're doing for a customer.

[00:12:54] George Swetlitz: They, they haven't yet mo and I'm sure some have, but many people that I talk to have not yet realized or kind of put the pieces together that AI allows them to be creative in the review ecosystem. Right. They view like there's this creative area that I work on and then there's this. Task stuff that I have to do.

[00:13:17] Robert Patin: Mm-hmm.

[00:13:17] George Swetlitz: And I haven't yet made that connection yet that the review piece is actually a creative piece and they can add a lot of value in that piece if they can, if they can, to grab the customer that they've worked so hard to find

[00:13:32] Robert Patin: you. You spend inordinate amount of money, ad spend dollars, content creation, all the things to get.

[00:13:37] Robert Patin: The consumer to read a review and then if it falls flat, it was all for naught. Right. I, the, the thing that I, I think is kind interesting that you're, you're describing a little bit in there too, is like you have a perception, not you specifically, but society has this perception right now that AI is either one, not a, doesn't have a utility in, in creative two, that it is going to, it's either gonna replace creative or doesn't have a place in it.

[00:14:05] Robert Patin: And I, I think it's quite interesting. I mean, the reality of how I see the world is I think that it's. The same way that you would end up taking and having a kind of yes and oriented conversation. So like, how do you take what it is that you're doing, get to the same output or better than you would've done faster, right?

[00:14:22] Robert Patin: So how do you get to the true, valuable components of what it is that you're doing as quickly as you possibly can? So the, the, the question I have for you is like, how are you thinking about. Utilizing and how did you think about utilizing AI to actually deliver a better output than what humans would've been able to consistently do?

[00:14:37] Robert Patin: So like you have your logic framework, you have all of those pieces, but how did you actually start to think about, all right, here's the output of what a person would do actioning this versus the implementation of what you did with RightResponse AI? Yeah.

[00:14:48] George Swetlitz: Well, I mean, it goes back to when, when, when I was leading this, you know, alpaca this business.

[00:14:53] George Swetlitz: When you talk to customers, they, you know, and you interact with them. They talk to you. They tell you things they liked and didn't like, and then you respond to them in a very real way. Right. And, and, and what we realized as a team was that we weren't doing that when, when they were telling us things in public.

[00:15:12] George Swetlitz: We weren't doing that. And so the challenge was how do you interact with people in public, the way that you interact with them in private? And it's, have you ever heard of this concept called the Uncanny Valley? Have you heard? I haven't. I haven't either. And a, a. Prospective client told me about this not so long ago, and the uncanny valley was, is it was a, a robotics guy in Japan in the seventies and he talked about the valley is between kind of robotics that are stylistic.

[00:15:40] George Swetlitz: So you think about like a cartoon robot that is clearly a robot and. Human qualities.

[00:15:46] Robert Patin: Mm-hmm.

[00:15:47] George Swetlitz: And it's endearing. And his, it, what he had seen was that as you got closer to human-like robotics, when it got to the point where it was almost human-like, but not quite, people fell off. They, they got, they got weirded out by it.

[00:16:03] Robert Patin: Hmm.

[00:16:03] George Swetlitz: Because it was close, but not quite there. And then when it became, when you had a robot that was, you know, a hundred percent that. It, you, you, you would feel for it again, right. You could communicate with it.

[00:16:15] Robert Patin: Mm-hmm.

[00:16:15] George Swetlitz: And there was a guy at, at MIT who just did some research, and he, he had badly trained AI, well-trained AI, and, and a human doing a task.

[00:16:24] George Swetlitz: People looking at the badly trained AI were very turned off.

[00:16:29] Robert Patin: Mm-hmm.

[00:16:29] George Swetlitz: With the well-trained AI, sometimes people thought it was a human; sometimes they even mistook the human for AI, right?

[00:16:35] Robert Patin: Mm-hmm.

[00:16:36] George Swetlitz: But this notion of trying to be human, but not quite doing it, turns people off.

[00:16:43] George Swetlitz: That's what he saw in the seventies, and I think that's happening again with AI. You can have an AI response that says, hey, thanks for letting us know, we'll investigate this. It's not attempting to be human; it's clearly a response that's deferring to the human. Or you can have AI that says, oh, we're thrilled that you were delighted.

[00:17:07] George Swetlitz: And we just think, this is strange, because you can see through it that it's not really human, and therefore it turns you off. It's the same uncanny valley problem.

[00:17:20] Robert Patin: Mm.

[00:17:21] George Swetlitz: With AI. And that, I think, is the point: we're not trying to be human. We're trying to make the business respond in a more human-like way.

[00:17:34] George Swetlitz: While it's clearly not human. And that's where people have comfort. They're more comfortable in that environment.

[00:17:43] Robert Patin: And I feel like that's creating the guardrails around how to utilize AI, which is something I was hoping to dive into.

[00:17:54] Robert Patin: But let me ask this question for you specifically: are you owning it, and explicitly saying that it's AI, in the responses?

[00:18:02] George Swetlitz: Well, we don't, because that would be kind of weird too. You don't really have a way to, unless at the bottom you signed the response as AI. You could do that.

[00:18:15] George Swetlitz: It's interesting. We could test that, actually. And it's not a bad idea, especially if your responses are very helpful.

[00:18:21] Robert Patin: Yeah.

[00:18:22] George Swetlitz: Right. But yeah, there are these ethical problems, these ethical questions, that people wrestle with all the time.

[00:18:30] George Swetlitz: But I think where we're trying to be human is just by trying to be helpful. Trying to have the business sound more connected, without pretending to have all the emotion that comes off in these silly, generic AI responses.

[00:18:44] Robert Patin: I think people sometimes have a problem being authentically human and emotional as well.

[00:18:49] Robert Patin: So trying to make AI be that, I think, is funny. I do think the uncanny valley conversation is a great framework for figuring out exactly what the guardrails are around where AI belongs, where it should and shouldn't live, at least in today's versions of what we have. The thing I'm curious about is this: let's say I'm a listener to this episode right now, thinking about AI implementation, and I'm considering utilizing an AI tool. There are tons of them out there; in the course of us recording this episode,

[00:19:19] Robert Patin: there are probably three more that were just launched. So there's the ability for us to create our own, and the ability for us to be watching out for things. When you were starting RightResponse AI, how did you think about the first steps of implementing AI in this specific direction?

[00:19:40] Robert Patin: Like how did you plan that out?

[00:19:41] George Swetlitz: Well, when we started, the issue was: can AI, in this particular space, can we make it useful?

[00:19:49] Robert Patin: Hmm.

[00:19:49] George Swetlitz: That was the question, and we had to test it for a while. And honestly, it's so much better today than it was when we launched; it continues to get better. So, for example, today in our fact library, it's a single if/then: if the customer mentions difficulty finding parking, tell them that the parking garage is in the back of the building, right?

[00:20:07] George Swetlitz: And so that's an if/then with a single "then." Makes sense.

[00:20:12] Robert Patin: Yep.

[00:20:13] George Swetlitz: But if somebody mentioned something about how they loved the way the technician engaged with them, there are actually a lot of things you could say. You could talk about the training they go through.

[00:20:24] George Swetlitz: You could talk about awards they've gotten. You could talk about a lot of things.

[00:20:27] Robert Patin: Mm-hmm.

[00:20:27] George Swetlitz: And so one of the things we're building now is multiple "thens": expanding on this base concept, but giving the marketers the ability to say, here is an entire range of things that I would like to talk about.

[00:20:43] George Swetlitz: And every time you add things, it just makes the AI more complicated, right? So what we can do now is a much heavier lift than what we could do even a year ago. But that's how we thought about it.
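The fact-library idea George describes, an "if" on the review's topic mapped to one or more "thens" the response can draw from, can be sketched roughly like this. All rule contents, names, and the keyword-matching approach here are illustrative assumptions, not RightResponse AI's actual implementation.

```python
# Minimal sketch of a fact library with multiple "thens" per trigger.
# Rule contents and matching logic are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class FactRule:
    triggers: tuple        # keywords that activate the rule (the "if")
    talking_points: tuple  # one or more facts a response may use (the "thens")

FACT_LIBRARY = [
    FactRule(("parking", "garage"),
             ("The parking garage is in the back of the building.",)),
    FactRule(("technician", "staff"),
             ("Our technicians complete ongoing training.",
              "The team has received service awards.")),
]

def facts_for_review(review_text: str) -> list:
    """Collect every talking point whose trigger appears in the review."""
    text = review_text.lower()
    points = []
    for rule in FACT_LIBRARY:
        if any(kw in text for kw in rule.triggers):
            points.extend(rule.talking_points)
    return points
```

A single-trigger review returns one fact, while the "technician" rule hands the response generator a range of points to choose from, which is the "multiple thens" expansion described above.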

[00:20:59] Robert Patin: Hmm.

[00:20:59] George Swetlitz: And where do we run into issues?

[00:21:02] George Swetlitz: I'll give you an example. Where we still run into issues is PHI for medical businesses. If you tell the AI, this is a medical business and we want to follow all the regulations around PHI, personal health information, it will literally say nothing, because technically you can't even respond.

[00:21:21] George Swetlitz: If you're a medical business, by responding you are admitting they're a patient. Technically, that's the case: you can't even respond. Now, people do respond, they actually do. So we're trying to explain to the AI that we want to be guarded around PHI, but in a more lenient way, and we have not

[00:21:43] George Swetlitz: perfectly cracked that code yet, because it's very complicated.

[00:21:48] Robert Patin: I mean, yeah, it's like asking a math equation to consider nuance, which is very difficult. That's an interesting use case from a testing-experience standpoint. So what was the phase where, having gone through all this testing and ideation, you figured out that yes, it can solve this problem?

[00:22:06] Robert Patin: The reality is, as you're finding with medical-oriented businesses, it still has a bit of a challenge. But I'm sure you got to a place where it was good enough to launch and start utilizing it live, in public, in the market. At what point were you able to make that choice?

[00:22:23] Robert Patin: What were the determining factors that made you say, all right, let's start to test this out with live businesses?

[00:22:28] George Swetlitz: Yeah. Well, it really comes down to repeatability. As a former CEO, you want to have confidence that a tool you're using is repeatable. And a lot of startups, tech-bro startups, don't think about that.

[00:22:44] George Swetlitz: They just think about getting to market; it's just do stuff. But coming from the background I came from, it had to be repeatable. And of course, AI is a challenge because it's never a hundred percent repeatable; it's always slightly different. And so we just did an incredible amount of testing to make sure that it stayed within

[00:23:03] George Swetlitz: boundaries that we were comfortable with.

[00:23:05] Robert Patin: A margin of error. Yeah, that makes sense.

[00:23:06] George Swetlitz: Yeah. And that's what gave us confidence to move forward.

[00:23:10] Robert Patin: You know, I think the way you've applied AI with your business, with RightResponse AI, is really intriguing, and such a creative,

[00:23:20] Robert Patin: amazing way to have gone through this implementation. Exactly as you were mentioning earlier in the episode: bringing it down to that smallest solvable unit, and thinking about the application from the logic that's put into that if/then statement.

[00:23:35] Robert Patin: There's so much utility in this, both from the platform you've created, which is useful to listeners in implementing it for their clients, but also in the framework by which to apply AI to their business, and how they're going to start to live in this new reality we've all found ourselves in. AI is here in perpetuity now.

[00:23:56] Robert Patin: It's here to stay, and it's time we start to look at and adapt to this world. I think the guardrails, the solvable unit, the logic you've applied with RightResponse AI make for a truly useful utility for listeners.

[00:24:09] George Swetlitz: Yeah. And it's interesting too, because in this particular space, it's not about reducing headcount and getting rid of people.

[00:24:18] George Swetlitz: It's about doing something that you weren't able to do before.

[00:24:21] Robert Patin: Hmm.

[00:24:21] George Swetlitz: People have not been able to respond in a helpful way at scale. So we did that, and then people came to us and said, well, can you apply this to review requests? So we started working on a review requester that used AI to integrate things the business knows about the customer

[00:24:41] George Swetlitz: to create a stronger emotional connection in that request, so that the conversion rate from request to review goes up.

[00:24:50] Robert Patin: Yep.

[00:24:50] George Swetlitz: And so we launched that a couple of months ago, and it's been very successful.

[00:24:55] Robert Patin: I've really enjoyed the conversation today, and the fact that you continue to innovate and adapt in the market is just incredible.

[00:25:01] Robert Patin: I know you're offering listeners some credits to test out the platform for their clients. Can you share a little bit about that, and also how to get in touch with you, George?

[00:25:08] George Swetlitz: Yeah, so for your listeners: if you follow the link that's down in the show notes, they'll be able to get 3,000 credits, versus the thousand that

[00:25:16] George Swetlitz: regular free trialers get.

[00:25:19] Robert Patin: Awesome. So we'll include the link in the show notes, everybody, so you can take advantage of those 3,000 credits, as well as a link to George's LinkedIn page if you want to get in touch with him directly. I find your journey with RightResponse AI, and your career journey, quite inspiring: how you've taken AI, applied it to a problem that I think any retail-oriented business is having, and applied it

[00:25:44] Robert Patin: in an authentic way that creates a better relationship with the consumer. I just think it's really fun, and a great application that I hadn't even considered. Thank you for joining today.

[00:25:56] George Swetlitz: Yep. Great. Nice to be here. Thank you.

[00:25:58] Robert Patin: Hey everyone. Thank you so much for listening in today. With all of the shifts in the industry, it is easy to feel overwhelmed or unsure of your next move.

[00:26:05] Robert Patin: So I'm not here to try to sell you anything. I just want to give back and guide you through the uncertainty. Whether you're navigating new trends or just need some clarity, we are here to listen and offer advice. Head to creativeagencysuccess.com/chat to book a free 50-minute chat with an agency mentor who can provide you with a fresh perspective.

[00:26:25] Robert Patin: Let's figure out the best steps for your agency's growth together.