1 hour 6 minutes 3 seconds
Speaker 1
00:00
If you're a regular listener of my podcast, I have another recommendation for you. The podcast is called Founders, and it's hosted by David Senra, who has devoted his life to learning from history's greatest entrepreneurs. Each week, David distills the lessons of a different founder, from Henry Ford to Coco Chanel to Edwin Land. His recent episode on James Cameron is particularly excellent if you're looking for somewhere to start.
Speaker 1
00:22
I guarantee if you listen to Founders, you'll feel inspired to level up whatever you're working on. You can find a link to Founders in the show notes of this conversation. And because Founders is part of the Colossus Network, you can search all the past transcripts on our website at joincolossus.com. This episode is brought to you by Tegus, the modern research platform for leading investors and the provider of Canalyst.
Speaker 1
00:44
Tired of calculating fully diluted shares outstanding? Access every publicly reported data point and industry-specific KPI through their database of over 4,000 drivable global models, hand-built by a team of sector-focused analysts, 35-plus industry comp sheets, and Excel add-ins that let you use their industry-leading data in your own spreadsheets. Tegus's models automatically update each quarter, including hard-to-calculate KPIs like stock-based comp and organic growth rates, empowering investors to bypass the friction of sourcing, building, and updating models. Make efficiency your competitive advantage and take back your time today.
Speaker 1
01:18
As a listener, you can trial Canalyst by Tegus for free by visiting tegus.co/patrick. Hello and welcome everyone. I'm Patrick O'Shaughnessy and this is Invest Like the Best. This show is an open-ended exploration of markets, ideas, stories, and strategies that will help you better invest both your time and your money.
Speaker 1
01:39
Invest Like the Best is part of the Colossus family of podcasts, and you can access all our podcasts, including edited transcripts, show notes, and other resources to keep learning at joincolossus.com.
Speaker 2
01:52
Patrick O'Shaughnessy is the CEO and founding partner of Positive Sum and the CEO of O'Shaughnessy Asset Management. All opinions expressed by Patrick and podcast guests are solely their own opinions and do not reflect the opinion of Positive Sum or O'Shaughnessy Asset Management. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions.
Speaker 2
02:15
Clients of Positive Sum or O'Shaughnessy Asset Management may maintain positions in the securities discussed in this podcast.
Speaker 1
02:24
My guest today is Des Traynor. Des is the co-founder and chief strategy officer of Intercom, a customer service solution that helps businesses answer product questions, offer instant support, and automate sales. The business was founded in 2011 and their products operate with 25,000 businesses, including the likes of Amazon, Lyft, and Atlassian.
Speaker 1
02:42
Our conversation is roughly split in half. First, we talk about AI and how it's actually changing businesses like Intercom through products such as their OpenAI-powered bot called Fin. We then talk about Des's views as an investor, which includes an answer about software that I'll remember for a long time. Please enjoy my conversation with Des Traynor.
Speaker 3
03:04
I've been so excited for this because I think it's the perfect excuse to talk to a real practitioner, builder, entrepreneur about what it's been like to feel the explosion of AI and its impact, both exciting, maybe even scary, on an existing product, an existing software product that's always been very cutting edge in terms of its architecture, et cetera, and how you approach this new opportunity slash threat. Everyone, I think, sees it as a bomb that went off in the technology scene. And I'd love you to begin by just giving me your high-level thoughts on what it's been like to see ChatGPT, GPT-4, all the open source models, all this explosion of new technology through the lens of an existing, really successful software business.
Speaker 2
03:48
It's been a good roller coaster. The fun one. We have dabbled in AI since 2016.
Speaker 2
03:53
We've had AI products live in market. I think the world changed with ChatGPT; both the capabilities of what AI could do changed, and also I think people's willingness to engage with a bot was a lot different after their first experience of ChatGPT. For us, I met with our head of ML, I think, the day after ChatGPT dropped, and we had a pretty long conversation where he was pretty firm that this is the single biggest transformation he's seen in AI, and he's spent his entire career in it.
Speaker 2
04:20
The more we played with it, the more we realized, hey, this thing is very conversational. It can learn, it can summarize, it can extract the main pieces, and given a query and given a set of information, it can actually propose an answer. And that is basically what customer support is. So for customer support, and Intercom is a customer service platform, the implications are pretty obvious.
Speaker 2
04:41
It sounded probably more like a prophecy at the time, but I think looking back, everyone's like, yeah, no, it's pretty obvious that the entire world of customer support was going to change. So the question was, how quickly can we do it? We downed tools on a lot of projects. It wasn't quite code red, but it was as close to it as can be, from an offensive point of view.
Speaker 2
04:58
It was let's go hard AI. The team worked insanely hard and we produced our first release. Well, we had our first release in beta in December. And then we talked about it publicly in January.
Speaker 2
05:09
And that was all about assisting the customer support rep, because we weren't yet sure that the tech would actually be good enough to face end users, but we knew it was definitely good enough to augment the customer support agent. GPT-4 was on its way around then. As soon as we got our hands on that, that was where we were able to say, hey, we can constrain this agent to not go off topic, to not take opinions about whatever politics, to not recommend your competitors, to not do anything you wouldn't want a support rep to do. And we can also, we believe, curtail the vast majority of the hallucinations, basically with the right amount of prompting.
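A minimal sketch of what prompt-level constraints like these can look like. The wording, function name, and company are illustrative assumptions, not Intercom's actual prompts; real guardrails also layer on retrieval, moderation, and post-hoc answer checks.

```python
def build_support_system_prompt(company: str, docs: list[str]) -> str:
    """Compose a system prompt that tries to keep a support bot on
    topic and grounded in the provided reference material."""
    context = "\n\n".join(docs)
    return (
        f"You are a customer support agent for {company}.\n"
        "Answer ONLY using the reference material below.\n"
        "If the answer is not in the reference material, say you don't "
        "know and offer to hand off to a human agent.\n"
        "Refuse questions unrelated to the company's products, including "
        "politics, competitors, and general knowledge.\n"
        "Never guess or invent details.\n\n"
        f"Reference material:\n{context}"
    )

# Hypothetical tenant with one knowledge-base article.
prompt = build_support_system_prompt(
    "Example Surf Shop",
    ["Returns are accepted within 30 days with a receipt."],
)
```

The same template lets each customer sound like themselves: the company name and docs change per tenant while the constraint language stays fixed.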
Speaker 2
05:43
So we started to play with that. We launched that in beta in March. Then we went to general availability, I think in late May or early June. And that's been a wild ride.
Speaker 2
05:53
Just seeing how the bot almost always outperforms our expectations of it, in terms of the complexity, the nuance. Customers will send us seven questions, not knowing they're talking to a bot, seven nested questions, if this, then how do I, and it just blitzes through. Genuine conversations that might've taken a support rep an hour to aggregate all the information for, Fin is just blitzing.
Speaker 2
06:16
And there are some shocking stats. We've seen customers see 50% of their support volume drop. We've seen, on average, most customers who turn Fin on with no other work, literally clicking an on button, are getting 15, 20, 25% of their support volume just going away straight away. And it's just been crazy to see a product where we knew we had done all the right stuff on our side, but you're still crossing your fingers going, I hope it works well in the aviation industry. Sure enough, it does.
Speaker 2
06:43
So it's been wild to see it up front. And we've continued, and obviously we can talk more about it, but yeah, we're now asking ourselves, what's the next tier of this? How do we make it more powerful? How do we make it interoperate with humans better, et cetera.
Speaker 3
06:54
Do you think that this technology is actually more powerful for existing companies that have prebuilt products and big data stores and big teams, et cetera, versus de novo startups? Let's say, for example, someone wanted to build a from-scratch Intercom competitor starting tomorrow, and they've got all this great new technology. Often in the history of technology, if you're new, you can counter-position using new technological platforms or whatever, and really stick it to incumbents. My experience so far has been, it feels different in this context. Actually, fast-moving incumbents, as long as they recognize it, are better positioned.
Speaker 3
07:35
Do you think that's right? Has it felt that way to you? Any big thoughts that you have on that idea?
Speaker 2
07:40
I've been thinking about this a lot, both from an Intercom perspective, and there are a lot of people trying to do customer support startups based on OpenAI's tech at the moment. And then I've also been thinking about it as an investor as well, just looking at the startup scene, seeing what's new. The way I've concluded is, and by the way, in your example, let's just take off the table the idea that GPT can write the code to make the Intercom competitor.
Speaker 2
08:02
Like, let's assume all those things are true for everyone. I think, if I'm trying to build a competitor, and to make it not personal, for lack of a better word, let's pick, say, MailChimp or something like that, an email newsletter tool, everyone understands the gist of it. The question I would have is: the way in which you'd build the competitor, is it substantially different to how MailChimp is architected?
Speaker 2
08:27
Are there some fundamental assumptions made in the code base of MailChimp that are now entirely invalidated? They just make no sense anymore. And if that is the case, then I think, yeah, the opportunity is with the new startup, because what MailChimp has to do if they want to compete is actually build a whole new thing. But I think in most cases, and if we just keep pulling on the MailChimp example, if you and I say, all right, hey Patrick, let's go build a MailChimp competitor, and we're going to use AI, and it's going to do AI to write your newsletters and AI to generate your designs.
Speaker 2
08:59
Brilliant. So we still have to build a massive deliverability platform. We still have to build link attribution. We still have to build email rendering and testing email rendering across a dozen different clients.
Speaker 2
09:10
We still probably have to build up brand credibility and all of that stuff. And all MailChimp has to do is call out to OpenAI for augmenting the text you're writing. So in that world, I think you might be in some 80/20 situation: 80% of the existing tech still stands, and 20% is going to be new stuff coming from OpenAI or coming from Anthropic or whoever we lean on. So I don't really see an advantage now.
Speaker 2
09:31
And the calculus I'd actually do when I talk to companies about prospective investments, or when I assess Intercom threats, is basically: how fast can they move? Let's assume that you and me and our super hot new startup can move at 10x the speed of MailChimp. And let's say we conclude it takes us 3 months to build all the AI stuff, and it might take MailChimp 30 months. The question is, is 27 months enough for me and you to go and build every other feature in MailChimp to the same standard?
Speaker 2
09:59
And if it's not, then we're goosed. We don't really have a play to make there. To give you just one counterexample, let's say me and you said, hey, we're gonna build one of these advertisement management optimization tools. You log in every day and you see what ads are working, and you change your spends and you cancel some ads and tweak some ads.
Speaker 2
10:17
You could imagine how an LLM-powered tool could basically optimize itself, consistently create new versions of ads, run those instead, run A/B tests amongst itself, and literally entirely manage your entire ad inventory directly without anyone ever having to log in. And in that world, I think a lot of the assumptions of the incumbent are totally invalid. They might have dashboards and reports and lovely funnels for configuring campaigns, but it's all unnecessary. The AI is going to do it all for you.
Speaker 2
10:48
So in that world, if we're going to go after a space where, hey, we would actually build this thing substantially differently were we building it today, I think all the advantage is with the entrant. That's when it gets exciting.
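The back-of-envelope calculus described above can be sketched in a few lines. The 30-month build and 10x speed advantage are the hypothetical numbers from the conversation, not real data:

```python
def catch_up_window_months(
    incumbent_ai_build_months: float, speed_advantage: float
) -> float:
    """Months the faster entrant has left to rebuild the incumbent's
    other features: the entrant finishes the AI layer speed_advantage
    times sooner, and the window closes when the incumbent catches up."""
    entrant_ai_build = incumbent_ai_build_months / speed_advantage
    return incumbent_ai_build_months - entrant_ai_build

# The example from the conversation: a 30-month incumbent build and a
# 10x faster entrant leave a 27-month window to rebuild everything else.
window = catch_up_window_months(30, 10)
```

If the remaining feature set can't be rebuilt inside that window, the model says the entrant has no play; if the incumbent's assumptions are invalidated entirely, the window stops mattering.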
Speaker 3
10:59
So talking again about Intercom, with stats of up to 50% of customer service requests handled end to end without a human in the loop: the whole thing you just described is a gradient. It's never one or the other, it's somewhere on a spectrum. So where do you feel like you fall on that spectrum of, this is an entirely new thing, versus, oh yeah, to build the rails underneath the delivery of this thing, like the MailChimp example, takes four years?
Speaker 2
11:26
I think we're somewhere in the middle where whole workflows are removed in Intercom's case, but not the entire platform. There are still humans doing support and they still need a pretty rich and powerful support help desk and they still need a messenger to communicate through. All of that needs to be available through APIs.
Speaker 2
11:41
You still need a knowledge base. There's a lot of other stuff that still has to be built. And then obviously Intercom also has proactive support, so there's messaging pieces as well. So I think what we have had to do is reimagine and throw away parts of Intercom that made old assumptions. Say, our reporting infrastructure is very different in a world where 50% of the responses are going through AI.
Speaker 2
12:01
All of a sudden people care a lot about the AI reporting, whereas they didn't before. Measures like first response time are now no longer really valid, because you have to subtract the AI and all that stuff. So I think, if you imagine all the support work that happens, X percent of it is gone; the remainder still needs what we would call a classical, high-quality help desk. A good chunk of the work just disappears entirely.
Speaker 2
12:22
And then there are some features or some workflows, we'll see how it plays out, where we'll probably heavily augment them with AI. So you could imagine analyzing: what are the most common complaints from customers today? I can imagine we'll throw a lot more AI at that feature to remove the needle-in-a-haystack approach, and maybe the new version of that report is just a summary paragraph that tells you, here are the biggest issues going on today. So I think that would be an example of workflow displacement.
Speaker 2
12:45
But I think in general, we believe the future of customer service will still involve humans and bots. And we care a lot about making sure that they can work together really well, and they can interoperate. And we have this idea of a flywheel where the humans help the bots and the bots help the humans. If we're right, anyone who wants to beat us also needs to have a pretty high quality help desk too.
Speaker 3
13:03
Can you help me understand the nuts and bolts of working backwards from a thing that I think everyone wants, and I'll call that thing an agent that is context-specific to their business or themselves or whatever. The feeling of ChatGPT or GPT-4, which is so magical, but with all the context and the knowledge, trained on me or trained on my business. And what you just said, people helping bots, bots helping people. Sounds like this is a process to take the generic, broad artificial intelligence, whichever provider you use, and make it context-specific or really helpful to me specifically.
Speaker 3
13:40
There's this process of retraining it or narrowly training it. So can you just literally tell me the steps of what you've learned about how that works? Because it seems reasonable to me that we're going in that direction. Everyone's going to have an agent that is tailored and aware of their specific circumstances and context and data and so on.
Speaker 3
13:59
And you're one of the first to actually build this process of training this thing. And I think of Fin as, okay, Fin's the Intercom agent. It is a multi-purpose thing, but it's very context-specific to your product and your world. So what have been the steps, the mistakes, the considerations? The details of this are really interesting to me, if this is where the world's going.
Speaker 2
14:20
There's a lot to this, but the biggest thing is, I think, we lean on OpenAI mostly to power Fin's decision-making, ultimately, like its judgment, and also its conversational capability. So we don't have to build stuff like, hi, how are you? Oh, I'm good.
Speaker 2
14:34
How are you? All of that type of boilerplate salutation and all that, OpenAI is just really good at. And it's really good even when we want to get to a point where our customers can have Fin speak in their own brand and tone of voice, so a surf shop and a bank can each sound appropriate for their domain.
Speaker 2
14:48
So I think our biggest challenge initially was, how do we get it to stay on topic, and how do we get it to not apply knowledge that has nothing to do with your business? So if you go to any Fin instance and ask it, like, who is the president of America, it actually won't attempt to answer, because there's a slippery slope, and you'll see a lot of people go down it, where you can force it to take opinions that the company doesn't want to take. Getting the bot, despite all its infinite wisdom, to refuse to engage in topics that are anything other than this bank or this surf shop or this podcast or whatever is an important step.
Speaker 2
15:20
And then removing hallucinations, so it doesn't do its best to, how would you say, please the user by making shit up. Those are the first two things. In practice, how does that happen? Well, we discern the most important pieces from the user.
Speaker 2
15:31
So when they come along and say, hey, how do I, whatever, reset my password or open an account, we look for the guts of what they're actually asking. You perform some vector searches across all the docs you've gathered. And that can include everything from your public knowledge base to the entire conversation that's led up until this point in the thread. So it can do things like, how do I do that?
Speaker 2
15:49
And it can infer what "that" means, et cetera. So you have to perform a search of: given that this is the question, drinking in all of this context, what do we suppose is the right answer? And then, how do we package that answer in a way that sounds, again, on brand and appropriate for where the conversation is at right now, depending on the tone it's taking? So that's most of the work.
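A toy sketch of the retrieval step just described: embed the question, rank the gathered docs by similarity, and hand the best match to the model as context for the answer. Production systems use learned dense embeddings and a vector database; the bag-of-words cosine here is an illustrative stand-in, and all names are invented:

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A real system would
    # call an embedding model and store dense vectors instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank every doc (knowledge-base articles, thread history, etc.)
    # by similarity to the question and keep the top k as context.
    q = embed(question)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "To reset your password, open Settings and choose Reset Password.",
    "Shipping takes three to five business days.",
]
context = retrieve("How do I reset my password?", docs)[0]
# The retrieved context, plus the conversation so far, is then packed
# into the prompt that asks the LLM to draft an on-brand answer.
```

The point of the sketch is the shape of the pipeline: question in, relevant knowledge out, answer grounded in that knowledge rather than in the model's general training.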
Speaker 2
16:08
The vector search is really important, and the knowledge sources are the single biggest variable. When we see amazing Fin instances and weaker Fin instances, it's literally, how much have you fed it? How much have you given it access to: everything about this user, everything about your product, your product docs?
Speaker 2
16:25
It's been impressive how Fin can even read stuff we didn't expect, be it API docs or things like that, to actually suggest extra answers or further reading for the users. But that's a lot of what it's been about. And for us, one of our challenges was coming up with an ability to benchmark where different versions are at, because we can tweak and change prompting, or we can change model and point at GPT-3.5 Turbo versus 4. And a lot of the time it's nuanced; you don't see the differences until you spot them.
Speaker 2
16:54
And then you realize, oh, this thing is actually going to go and recommend our competitor, or it'll go off topic or whatever. One thing we've learned of late is there are so many people who have built a wrapper around GPT-3.5 Turbo. And if you just test them all by asking a really obvious question and getting a really obvious answer, you're going to conclude they're all the same bot. It is unfortunate.
Speaker 2
17:14
And I say unfortunate for us, because it's in the nuance and in the less likely scenarios that you realize some bots are very good and some bots are very bad. They can all answer, what does X do, or what does this company do? It's really when you get into specifics, like a refund request, or you try and make it go off topic.
Speaker 2
17:32
That's where you see its underperformance, or you see the lack of prompting, the lack of training, et cetera. And just yesterday we released a bot buyer's guide, because we're trying to teach customers about these differences. Obviously you hate having to market nuance. We'd rather we were the only Fin in town, but that's not the case.
Speaker 2
17:46
It was the case for 3 or 4 months, but now there's a lot of YC startups doing it. Now we're into who's got the most, who's thought this through the most. Ultimately, I still think, though, as I said earlier, the battle will really shift to being who has the right platform, who has the holistic solution for customer service.
Speaker 3
18:00
If you think about the examples you've seen so far, some of which have crazy, staggering numbers that you quoted, in terms of how much of the previous workload that was human-handled is now handled end to end by a machine. Where do you think the natural endpoint of this is for your business? I'm curious what the adoption's been, like how many of the normal Intercom customers are using Fin in some way, shape or form.
Speaker 3
18:23
So what's the friction to getting on this train, and where does it go? Is the natural endpoint that customer service is 80, 90% handled by AI, and it's just the strange edge cases that get spit out to a much smaller support desk? Where do you think this goes, and what have been the frictions to adoption?
Speaker 2
18:40
The largest barrier is the people aspect. Customer support is, generally speaking, a human-operated industry. And if we move a button in Intercom's inbox, our support team just gets fed fire by our customers.
Speaker 2
18:54
Cause they're like, yo, if you're going to make a single change to this inbox, we have to have an offsite to retrain our entire staff. And when you realize that could be hundreds of people; we have customers who have thousands of Intercom seats. So one single change of, hey, it used to say Send and Close, and now it just says Send.
Speaker 2
19:09
That could literally set a large Intercom instance back weeks in terms of support volume. So we're very delicate about how we make these changes. The other side of that friction is, our customers are very, very slow to adopt, for a very good reason. They say, okay, we will try. So we're seeing a lot in, say, Fin's case.
Speaker 2
19:26
We have many, many, many hundreds, if not thousands, of Fin users spitting out tens of thousands of answers on a regular basis, et cetera. But what we're seeing is everyone wants to dip their toe. They don't want to say, let's point Fin at the entire support volume. What they say is, hey, let's turn Fin on on the weekends.
Speaker 2
19:43
Or they say things like, let's turn Fin on if, and only if, the question regards resetting a password, or something like that. And they're doing that because they want to get a sense of how it's performing in a smaller use case. And then, ultimately, we've only really been live to literally everyone for, I think, about 8 weeks. We have a lot of planned, larger migrations where customer support teams are going to switch a massive amount of volume over to Fin.
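The dip-a-toe rules described here, weekends only or password questions only, amount to a small routing predicate in front of the bot. A hedged sketch; the rule set and function name are invented for illustration, not how any particular platform configures this:

```python
from datetime import datetime

def should_route_to_bot(message: str, now: datetime) -> bool:
    # Illustrative gradual-rollout policy: hand the conversation to the
    # bot only on weekends, or when it looks like a password question.
    is_weekend = now.weekday() >= 5  # Monday=0 ... Saturday=5, Sunday=6
    looks_like_password_question = "password" in message.lower()
    return is_weekend or looks_like_password_question
```

In practice a platform would expose rules like this as configuration rather than code, and widen the predicate as confidence in the bot's performance grows.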
Speaker 2
20:08
But they're busy working on their knowledge bases and they're busy working on their snippets, et cetera, the snippets being the things that Fin reads to produce answers. So I think what we're seeing is a lot of people preparing for this world, but the friction is definitely, how do I get my support team on board? How do I make sure that we're doing all the right stuff?
Speaker 2
20:24
Even in a lot of cases, all our help docs are out of date and we didn't realize it, because our customer support team were saving our ass, and now we need to update our help docs before we let Fin in. The other question we ask is, where does it stop? And I think that's genuinely something we don't really know the answer to. We will very shortly have live in market this Fin-generated snippets feature, where Fin will read your entire conversational context for your business, going all the way back.
Speaker 2
20:48
And it'll learn from every live conversation that happens and it'll produce snippets, little nuggets of knowledge to augment its own understanding of your business along the way. Our vision for that will be that your customer support team deals with any common query they see. They should see it for the first time and the last time. They should basically answer it once and believe that they'll never see it again.
Speaker 2
21:06
Where we want them to spend their time is on high value, brand building, high urgency, high impact conversations. We want them spending time in proactive support. We want them reaching out rather than like dealing with customers when stuff goes wrong, we want them reaching out to make sure that everything goes right. That's the future world we want for support.
Speaker 2
21:21
In terms of what percentage of raw inbound could actually go through, it will vary vertical by vertical. If you think about, say, an e-commerce store, there are really only ten questions you'd ever ask. It's where's my order? Why is it late?
Speaker 2
21:32
I want to refund it, or it broke. What's most exciting to us is, what will this snippets feature do? Because you're just going to keep aggregating knowledge. If you take Intercom, we do 20,000 support conversations a month.
Speaker 2
21:43
So about a quarter million a year. If we just look back two years, that's half a million conversations, on top of our entire knowledge base and our API docs and all of our training material and our education material. It's a huge amount of information about the product for Fin to consume. I can't yet tell you where we will be, but I suspect it's still going to keep increasing; our own resolution rate and all of our customers' resolution rates only go upwards. And then the question is, where does it start to asymptote?
Speaker 2
22:06
We've yet to see it, but we are only about 8 weeks in. So we're still at the precipice of a lot of these changes, I think.
Speaker 3
22:12
What is your overarching product philosophy that sits behind all this stuff? At the end of the day, this is just new technology. It's like any technology.
Speaker 3
22:20
It's just something that lets you do something for someone else. Do you have an overarching product philosophy that is the bedrock on top of which you make all these decisions?
Speaker 2
22:29
Yeah. We have a customer service manifesto, which is our set of beliefs about how the world of customer service will change over the next few years. And we've had one before, but before AI, it was mostly about conversational. We pioneered the idea of chatting inside your product and on your website.
Speaker 2
22:46
And with that, we went hard on conversational. Today, our manifesto really has four core ideas in it. The first one is bots and humans will work together. That's a very firm belief we have.
Speaker 2
22:56
We believe humans are essential. We want to supercharge humans with AI, and we want AI-powered chatbots to reduce a lot of the work for the humans. Our second belief is that support should be proactive and reactive, which means you should be able to get out ahead of problems. Our third is, all support needs to be conversational and omnichannel.
Speaker 2
23:14
So you can talk to customers any way they want to talk to you, anywhere, any way. And then lastly, all of this has to work together, so you can't try and stitch together three or four different tools. We often save customers from a world where they have a ticketing tool, a docs tool, an outbound tool, and a different live chat messaging tool.
Speaker 2
23:31
And they have some Zapier-powered thing or something, one of those integration-powered things, where they try and stitch it all together and get themselves some source of truth, but it really doesn't work very well. And we do all of that. The higher-level thing is in service of an internet full of better customer service.
Speaker 2
23:45
Our mission from 2011 has been make internet business personal. And that's really what we're about.
Speaker 3
23:51
If you think about the choice between providers underneath all of this, which is something I don't think you and I have talked about yet, how do you approach that problem? GPT-4 has won the Kleenex battle or something.
Speaker 3
24:03
It's the thing everyone thinks of, but the reality is there are lots of tissue providers, and, you know, there'll be more every year. How do you assess these things? They're so complicated and nuanced and interesting. You mentioned Anthropic earlier.
Speaker 3
24:16
There's open source ones out of Facebook.
Speaker 2
24:18
Llama, there's Cohere.
Speaker 3
24:20
Yeah, there are all these cool things happening. So talk us through that part of all this. If I think about Azure versus AWS, you can build your software on any of the big three cloud providers with only little differences.
Speaker 3
24:32
Is it the same story here? Is it just subtle differences, or is there clear leadership from OpenAI or someone else?
Speaker 2
24:39
From our perspective, I suspect it'll end up a lot like the Azure versus AWS type thing. Certainly it's trending that way. We started with OpenAI because they were first out of the gates, and we've been partners with them for quite a while, and we've had early access to previous versions of all this tech.
Speaker 2
24:55
They do seem to be leading the way, too. GPT-4, when they released it, was definitely head and shoulders above everything else that was out there. So I think we're going to partner with whoever we think is going to give us the most access to the best tech. The reason we'd change would probably be either that somebody else has better tech, which has yet to happen, or some latter-day optimization. You could imagine something like Anthropic being available in an EU instance of Amazon when we can't get GPT over there.
Speaker 2
25:23
So, let's swap. You can imagine some version like that, where we do it for business reasons. If we were to go and chase anyone else, it would probably be for accessibility or availability reasons. It could be tech reasons; we just haven't seen it yet.
Speaker 2
25:36
The last one, and I'd be surprised if it shakes out this way, is just price. So GPT-4 is expensive. And as a result, Fin is perceived to be expensive. We charge 99 cents a resolution.
Speaker 2
25:45
It's way cheaper than you'd pay a human, but way more than you'd guess. Because an awful lot of people will just say, but it's just an API call, how can you charge that much money? The reality is, that's what OpenAI charges. So that's why it checks out that way.
Speaker 2
25:54
So that's why that's how it checks out that way. But I could imagine if prices continue to run hot and people start to release substantially cheaper versions, there are genuinely features that would be prohibitively expensive for us to build today. To give you a simple example, summarize every conversation in real time as they happen. We have 5, 600 million conversations a month that would bankrupt us.
Speaker 2
26:16
However, if somebody gave us a substantially cheaper version, all of a sudden that's back on the table. There are pricing implications here. We haven't bumped into them yet. And right now we're still in the innovating and pioneering phase.
Speaker 2
26:26
We're trying to do as much cool stuff as we can, so it doesn't feel like we're yet in the mode to optimize. But I will say I take a lot of comfort in the fact that there are so many strong competitors here. It tells me that price will go down, availability will go up, and the competition to improve the tech will be pretty high.
Speaker 3
26:42
99 cents per resolution, how does that stack up to the cost, on average across Intercom's customers, of a human-led resolution, do you think?
Speaker 2
26:50
So the first variable is: is your support done by a citizen of the United States working in San Francisco, California, or is it outsourced to an agency? And if so, where is that agency located? We've never seen anyone get it substantially cheaper than those 99 cents.
Speaker 2
27:06
There is a nuance to this. If the answer to the question is no, then an agent can do 60 nos in an hour, and you're probably not paying them $60 an hour. But that's rarely the case. Most of the time, a lot of time is taken to onboard agents, train them up, get them able to deal with the complexity, get them to manage multiple back-and-forths.
Speaker 2
27:24
But for sure, some people will show me an example and say, that's definitely not worth 99 cents. And that's true. We don't know a priori whether or not the thing is worth answering until we answer it. But in general, we see most support reps paid somewhere between, the floor here would be, I dunno, $8, $9, $10 an hour or something like that.
Speaker 2
27:41
We haven't ever outsourced our support to extreme low-cost areas at all, but I'm sure some will tell me you can get it for like $2 or $3 an hour. Well, let's see. But in all these cases, I think most of our customers, who are B2B tech companies, generally speaking, are paying more for their actual support team.
Speaker 2
27:58
And then the other aspect that people often forget is there's a behavioral difference between a user getting an answer in 0 seconds versus in 7 minutes. So if the question was, hey, I've just signed into Asana and I want to know how to create a project: if you answer that question immediately, they go and create the project, they continue to expand, and growth goes up. If you answer that question 11 minutes later, they're on a different tab signing up for a different product.
Speaker 2
28:22
So there is value in instant support that goes beyond simply job done.
Speaker 3
28:27
Yeah, it's totally fascinating. Obviously you would expect the 99 cents thing to come down. And also the customer support agent sitting in a call center all of a sudden feels dystopian or something.
Speaker 3
28:37
It's just a job that, but for the most valuable conversations, should be automated on top of a knowledge store that a company has. And I wonder why there hasn't been a company that does this for everyone. So you talked about your problems: okay, we've got to make it not answer irrelevant questions or start having opinions and hallucinating and all this stuff, and then we need to train it on the customer's knowledge store.
Speaker 3
29:02
There's a process that you've built that lets your customers make this work. Why isn't there a company that you could just hire to do all that for you, given that it seems like everyone's going to want one of these agents in their business? What matters is the data. It seems to me like what's valuable is the companies that have great data, not the workflow.
Speaker 3
29:20
So why isn't there just a provider that does that part for everybody?
Speaker 2
29:24
You know that phrase, that people are too optimistic about what's possible in the short term and too pessimistic about what's possible in the long term? That's very much how I see the commentary around AI. If you said, all right, drink in all the S1s that are published every year and produce stock tips or something like that, the drinking-in piece isn't actually hard.
Speaker 2
29:42
It's hard to find a moat there. I think basically what is proprietary and easy to differentiate on is either your data store or the actual workflows that you build on top of the LLM's interpretation of the data. So it is definitely easy to point an LLM at a large source of information, and even, through something like Pinecone, point it at a vector search across that so you can filter it down and really get to the specific bits you want to see. What's genuinely hard is making it useful, making it do something that actually displaces a large amount of work. And if you take an area like finance or legal, I think the threshold or the requirements you'd have are pretty high in terms of trustworthiness.
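As a rough illustration of the retrieval step being described, here is a toy sketch in which bag-of-words cosine similarity stands in for real embeddings and a vector database like Pinecone; the documents, queries, and function names are invented for the example:

```python
# Toy stand-in for "point it at a vector search": bag-of-words cosine
# similarity instead of real embeddings and a vector DB like Pinecone.
import math

def embed(text: str, vocab: list[str]) -> list[float]:
    """Unit-length word-count vector over a fixed vocabulary."""
    words = text.lower().split()
    vec = [float(words.count(w)) for w in vocab]
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k docs most similar to the query."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    qv = embed(query, vocab)
    score = lambda d: sum(a * b for a, b in zip(qv, embed(d, vocab)))
    return sorted(docs, key=score, reverse=True)[:k]

docs = ["how to reset your password",
        "pricing and billing faq",
        "api rate limits explained"]
print(retrieve("reset your password please", docs))
# -> ['how to reset your password']
```

Swapping in real embeddings only changes the embed function, not the shape of the pipeline, which is the point being made: this filtering step is the easy part, and the product around it is the hard part.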
Speaker 2
30:22
And I think the challenge would be being able to make a bold enough claim that you can stand over, to say, hey, this thing will not get your conclusions wrong. If you could package that up and then integrate it into an existing workflow, I think you would actually have a great opportunity. But simply the ability to filter information and consume it, and then, given this thing, answer this question: that's not the hard bit.
Speaker 2
30:42
The hard bit is making that into a product. We're touching on the difference between a feature and a product, if you know what I mean. The ability to consume information and spit back some stuff is a feature. The product around it is reporting, accuracy, collaboration, all of the other shit that actually makes it into something that a business would adopt.
Speaker 2
30:58
That's still, I think, a challenge that you have to take on if you do a startup in this space.
Speaker 3
31:02
You said earlier that this was on the borderline of being a red alert or something for Intercom. Talk about how that gets managed inside a company. I imagine there are lots of companies that are looking at this as both an existential risk and an opportunity.
Speaker 3
31:16
So what do you think you did well, or poorly? What advice would you give other people facing down this seismic change, in terms of just managing and steering an existing company through it?
Speaker 2
31:27
I think the thing we got most correct, and I put a lot of the credit for this, although he won't take it, on Owen, our CEO, was just the speed of decision making. I think it's very, very easy for a company to say, hey, it looks like this OpenAI stuff is going to be a thing. Let's form a small cross-functional team, we'll do an investigation, and we'll get a readout from them in 3 or 4 weeks.
Speaker 2
31:49
And then we'll finish out the quarter's roadmap as we planned, and then we'll take stock and see where we're at. And I think that's probably the norm for a lot of companies, to work in that methodical way. And that's not what we did. What we actually did, practically, was abandon multiple projects and delete multiple roadmaps, for our automated support group, for our ML team, for our inbox team; we literally binned everything they were working on and worked on this thing instead.
Speaker 2
32:13
So startups often talk about how important speed is or whatever, and I certainly do this. When I'm talking to founders that I've invested in or whatever, I'm always trying to say, it's not their speed, it's your speed: how quickly are you going to make this decision to unlock them? If they know you take a week for a decision, they're not going to come to you quickly with stuff.
Speaker 2
32:29
So in this exact example, Owen was like, let's go. We weren't going to wait for the data. We weren't going to wait for the evidence. We weren't going to wait for our competitors to say that they're also looking at this.
Speaker 2
32:38
We were just, it's go time. So I think that is probably the single biggest unlock, that we were able to fire the starter pistol that quickly. I think like 14 hours after ChatGPT dropped, we were able to go and we started working on it. And then we probably also had a small advantage.
Speaker 2
32:53
We'd been dabbling in AI for years; we have a live product called Resolution Bot, et cetera. So my advice to other companies, if we go back to the topic I mentioned about the ratio, in your new world, of how important AI is versus how important everything you've already built is: I think you need to work out if there are core product assumptions that are just no longer valid. If so, you need to drop everything; nothing else matters.
Speaker 2
33:15
If you get this wrong, nothing else matters. If you get this right, nothing else matters. It's the only thing. If it's not that, then is it, hey, there are significant workflows we can just cut out and rebuild using AI?
Speaker 2
33:26
Whether it's reporting or image creation or something like that, I would say it's almost too late if you're trying to be first in your category to say, we have blah, but with AI. I think everyone's over the idea of, we also have a lightning bolt in our UI that, if you click it, can invent some text. It's a done deal at this stage. You now need to look at, in my opinion, proper end-to-end workflow automations. I would start fixing my gaze a lot more on what pieces of work we can remove entirely.
Speaker 2
33:54
Because that's where the actual formula changes. We often talk internally about the formula of support, which is the number of inbound conversations, times the amount of time it takes to solve a conversation, divided by the number of support reps, and all that. We look at all that. If you can just chomp out a massive amount of one of the numbers in that formula, that changes the optics of the business entirely.
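The support formula described here can be sketched in a few lines; the function name and sample numbers are illustrative, not Intercom's:

```python
# The support-load formula described above: inbound volume times handle
# time, divided by a rep's available time. Sample numbers are made up.

def reps_required(inbound_per_week: int,
                  minutes_per_conversation: float,
                  minutes_per_rep_per_week: float = 40 * 60) -> float:
    """Headcount implied by total handling time / one rep's capacity."""
    return inbound_per_week * minutes_per_conversation / minutes_per_rep_per_week

baseline = reps_required(10_000, 12)   # 50.0 reps
deflected = reps_required(5_000, 12)   # halve inbound -> 25.0 reps
print(baseline, deflected)
```

Chomping a big piece out of any one factor, inbound volume or handle time, cuts the implied headcount proportionally, which is the wholesale change in the business being described.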
Speaker 2
34:12
And I would encourage other startup founders to ask: of all the tasks that have to happen in our product, which can we remove as much of as possible? Take away so much of what would now be seen as undifferentiated heavy lifting. Where can we just delete it? And that's how you have a winning product.
Speaker 3
34:27
That's interesting. So if you think about the many of these things that are, I'll call it, copilot for X, Y, and Z, I think that you're saying something very different.
Speaker 3
34:35
Don't build another co-pilot. Everyone's doing that. Everyone's adding AI to their existing thing. Instead say, what part of the customer's workflow can we just literally kill?
Speaker 3
34:45
Just have it go away completely without them doing anything. We almost don't want customers to interact with AI. We just want things to happen as a result of AI that they don't even think about.
Speaker 2
34:55
That's the highest order goal you can go for. And there are cases where you can achieve that end to end pure automation.
Speaker 3
35:03
I mean, you've done it.
Speaker 2
35:04
Yeah, exactly. That's not to say the copilot-style things aren't useful. If you take, say, GitHub Copilot, the most famous of them.
Speaker 2
35:13
It turns out finishing the sentence in code is something every programmer does dozens of times a day. So you look at how often the thing happens and how long the thing takes. If you start writing a for loop in code, sure, it doesn't take a lot of time for you to finish the bracketing and all that stuff, but you might do that, whatever, 20 times a day. So being able to hit tab instead of having to type anything: if you just multiply the two things out, frequency times time spent, you'll actually get a sense of where the value is.
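That frequency-times-time heuristic is easy to make concrete; the feature names and numbers below are made up for illustration:

```python
# Rank candidate AI features by frequency x time saved, per the
# heuristic above. Feature names and numbers are illustrative.

def annual_hours_saved(times_per_day: float, seconds_saved: float,
                       working_days: int = 250) -> float:
    """Hours per year a feature saves one user."""
    return times_per_day * seconds_saved * working_days / 3600

candidates = {
    "tab-complete a for loop": annual_hours_saved(20, 15),  # tiny saving, huge frequency
    "draft a blog opening":    annual_hours_saved(1, 120),  # bigger saving, rare
}
for name, hours in sorted(candidates.items(), key=lambda kv: -kv[1]):
    print(f"{name}: ~{hours:.0f} hours/year")
```

Even with a small per-use saving, the high-frequency action wins on this metric, which is the argument being made for why code completion is valuable despite each completion being trivial.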
Speaker 2
35:36
A non-AI example of this would be spell checking or something like that. People type all the time and have to go back and fix things; if you can do it for them, that's better. I think you can look at it from that perspective, but I just think a lot of startups tend to go for the copilot thing because, bluntly, I think it's the easiest thing to do.
Speaker 2
35:53
Hey, write an idea for a blog post and we'll suggest the opening paragraph. Whoop-a-dee-doo. It's not really a huge thing. What might be a huge thing is: find the most common SEO terms that we're not ranking for, suggest articles we should write, write the articles, go to DALL-E and create the images, then suggest them back, and then criticize them.
Speaker 2
36:09
And now I sit down in front of 12 suggested articles that are fully specced out. Now I'm starting to think, huh, this could be replacing a workflow. Everything else is just a little bit of a nicety along the way, I think.
Speaker 3
36:21
As an investor, how much have you seen that is exciting in this spirit?
Speaker 2
36:26
I would have to say I see much more of what I would call the easiest-API-call type of AI. Honestly, right now, the majority of what I see is not amazing. It's very much just, we also know how to summarize a piece of text or change the tone of a piece of text or whatever.
Speaker 2
36:47
And that could be, we're a sales tool, but we can guess the opening introduction in your paragraph or whatever. And I think those things are basically dead in the water. I think stuff like, say, what Rewind AI are doing is far more powerful: drinking in a massive amount of data and then giving me a natural query engine on top of it.
Speaker 2
37:02
I think that's really exciting. I think some of the visual stuff I've seen, be it Midjourney or be it one I've invested in called Kettle, where again, you just describe the visual of what you want and they produce a vector. And what's really cool about a vector is you can actually tweak it yourself then if it's not exactly what you want. I've seen really good examples of that type of thing.
Speaker 2
37:21
But I do believe that we have yet to get to the bottom of the higher-order stuff I'm describing, where it's a complete automation. I think a lot of that's still to come. The question for me is, will it come from the incumbents or not? That's still, I think, an open question in a lot of these industries.
Speaker 3
37:38
When you're having conversations with peers at other software companies, and you and I were introduced by John Collison at Stripe, what are the most common, most interesting discussions that you're having about all of this that's unfolding?
Speaker 2
37:52
The biggest question I always try to zoom in on is: I know AI is really important and it's core to this new workflow that you're going to automate or whatever; talk to me about the rest of the product that you're building around it. And this sounds like maybe not the sexiest answer to give you, but I think a lot of folks have forgotten that you still have to build world-class software these days. I think you said previously on Twitter something along the lines of, strategy is for amateurs and execution is for winners.
Speaker 2
38:21
And I think having a cool idea for an AI feature can genuinely be unique, maybe you spotted the capability and others just haven't, but in most cases you still have to go and build an incredible piece of software around it. So you might have a unique twist on how project management should be done, and it involves a bit of AI, and that's awesome, but you still need to have a PM tool that's as good as Linear, and that's still going to be a huge amount of work. So what I try to do is shift the conversation there, just to make sure that there's actually something behind this. Because my fear in so many of these AI cases is that we're all just pinging text over to OpenAI.
Speaker 2
38:57
And these prompts and these bits of text, they might be proprietary. You might have a bit of an edge there. You might do some clever shit client-side before it goes over there, but it can't be that you're the only person who's worked out the right prompt. If you're investing in something that is built on AI, the thing that you're doing still has to be pretty brilliant, which might mean really good integrations, a really rich platform, beautiful UI, et cetera.
Speaker 2
39:20
The only other area I'd say I've had some interesting discussions in, and again, I don't know if this will fall into incumbents benefit or startups benefit, is this idea of chat-driven UI. I'm sure there is at least one piece of enterprise bloatware that you use in your day-to-day life that is just awful. For me, it might be, say, Workday or Coupa or one of those tools, where the tool is just so deeply complicated and all I wanted to do was file an expense or request a day off or something like that. The next thing you know, I have 9 tabs open and 3 dropdown menus and I'm still none the wiser.
Speaker 2
39:50
In those tools, I really, really want to just press Command-J, book day off, October 17th, return, or something like that. I think chat UI will have a massive impact on that whole industry. The way I describe it is: when the user knows what they want to do, but they don't know how to achieve it in your insanely powerful (or, to be less complimentary, complicated) product, that's when chat UI is really going to take off.
Speaker 2
40:16
And I don't know if you saw Equals, the spreadsheet tool.
Speaker 3
40:19
Yeah, a little bit, not much.
Speaker 2
40:20
So Equals is basically a spreadsheet tool. One of the things they've built in is AI, and the other thing they have is live data sources. But the cool thing I like about the way they've used AI is, I don't know Excel's query language very well, and with Equals I don't need to. I just say, sum up all of these that were expensed before January 17th and multiply it by the CPC return, and it works it out.
Speaker 2
40:40
So you can literally write Excel queries in natural language. You can also write SQL queries in natural language too. From my point of view, that has made the spreadsheet an infinitely more accessible tool to millions of people who were never going to learn Excel, and I think that's where you can blow up your total addressable market just through the magic of a chat UI. So that's another area of excitement for me: when I see a sustainable advantage that the chat UI would open up, that's another thing that gets me excited.
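As a hedged sketch of the general pattern such tools use (not Equals' actual implementation): the table schema and the plain-English request get wrapped into a prompt for an LLM, which returns the query. The schema, prompt wording, and the idea of a completion-API stub are all assumptions here:

```python
# Sketch of the natural-language-to-query pattern. The schema and
# prompt wording are invented; the LLM call itself is left as a stub.

def build_sql_prompt(schema: dict[str, list[str]], request: str) -> str:
    """Pack the table schema plus a plain-English request into a prompt."""
    tables = "\n".join(f"- {name}({', '.join(cols)})"
                       for name, cols in schema.items())
    return ("You translate plain-English requests into a single SQL query.\n"
            f"Tables:\n{tables}\n"
            f"Request: {request}\n"
            "Reply with SQL only.")

schema = {"expenses": ["id", "amount", "expensed_at", "category"]}
prompt = build_sql_prompt(schema, "sum all expenses before January 17th")
print(prompt)  # this string would then be sent to a completion API
```

The accessibility win comes from the user never seeing the generated SQL unless they want to; the schema-in-prompt step is what lets the model write a query against tables it has never seen before.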
Speaker 3
41:08
How do you think about the risk side of this? You mentioned earlier finance or banking or maybe healthcare, places where certainty is more important than in other places. And in Excel, I guess it's the same thing.
Speaker 3
41:21
If this is lower stakes, or approximations are fine, then that sounds amazing. But if this needs to be precisely right, then great, I'm glad it can write the query for me, but then I still gotta go double-check that the query's doing the right thing. And in customer service, I'm sure there's elements of this too, where some things are really easy to handle, you're just pointing someone at the right document or something, but sometimes the criticality rises and the cost of error goes up, and then you start to worry about these models.
Speaker 3
41:47
So how do you think about the tail risk of a big complex thing that we don't really know how it's working, that just works most of the time, but we need it to 100% of the time?
Speaker 2
41:57
Yeah, and to make it more complicated, because you've unlocked a wider addressable market, the person will not identify the mistake. I won't know if there's a bug in the Excel stuff, cause I didn't know what it was supposed to look like in the first place. I think this is a genuine risk.
Speaker 2
42:09
So the answer I'm supposed to give you is, and that's why there's always going to be a human in the loop, and you turn this around: rather than doing it for the Deses of the world, you do it for the person who actually builds spreadsheets for Des, and instead you speed them up. And I think that works to some degree. But we even see this with, say, assistive driving AI, where the person has their hands on the wheel, but they're not paying anywhere near the same amount of attention as they used to, because they know the car is actually driving itself.
Speaker 2
42:32
So you actually have this challenge of when criticality is high, should AI basically be left out? Do we need to have a higher threshold? How do you define that threshold when it is purely generative? It's hard to say.
Speaker 2
42:44
I can tell you what we do in Intercom: we have a separate product to Fin called Custom Answers. And what Custom Answers does is more of, how would you say, the kind of old-school, traditional AI that existed before November. In that case, its behavior is quite different. It works something like: if this query looks like it's relating to an important topic, let's say refunds or locked out of my account or authentication, then (so that's your AI there, it's just fuzzy-logic matching) we have a very specific answer.
Speaker 2
43:14
And what we do is use Custom Answers to target really specific things where we have a precise set of steps and we actually can't afford to have generative conclusions; we actually need to control every word that is said. A lot of our customers use Fin and Custom Answers together, and they effectively splice out stuff that sounds incredibly important and let Fin take care of the rest. But even at that, there's still going to be a weakness there somewhere.
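A minimal sketch of that split: fuzzy keyword matching routes sensitive topics to hand-written replies, and everything else falls through to the generative bot. The topics, answers, and function names are invented for illustration, not Intercom's actual logic:

```python
# Hand-written answers for sensitive topics via fuzzy matching; all
# other queries fall through to the generative bot. Illustrative only.
import difflib

CUSTOM_ANSWERS = {
    "refund": "To request a refund, go to Billing > Refunds and ...",
    "authentication": "Use the 'Forgot password' link; we never reset by email.",
}

def generative_answer(query: str) -> str:
    return f"[generative bot handles: {query!r}]"  # stand-in for the LLM

def route(query: str) -> str:
    text = query.lower()
    for topic, answer in CUSTOM_ANSWERS.items():
        # topic appears verbatim, or a close misspelling of it does
        if topic in text or difflib.get_close_matches(topic, text.split(), cutoff=0.8):
            return answer  # controlled, word-for-word reply
    return generative_answer(query)

print(route("I want a refund for last month"))
print(route("how do I export my data?"))
```

The design point is that the high-stakes branch is deterministic and auditable, every word is authored by a human, while the generative branch absorbs the long tail where occasional imprecision is tolerable.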
Speaker 2
43:37
If you keep scratching and sniffing, you'll find something that actually turned out to be more important than it sounded. I think this is the fuzzy world we're all heading into, where things will possibly be more wrong than they used to be, but they'll happen in real time as opposed to taking days, weeks, months, depending on the task being optimized. I do suspect this is why you won't necessarily see GPT used in doctors' diagnosis apps. They might speed up the doctor, but there will still need to be a human in the loop.
Speaker 2
44:02
And we just have to hope that, say, the driver in the AI-augmented car is still fully switched on to what's actually going on, and that they're not just copying and pasting shit out of ChatGPT or whatever. But I think this is a new world we're entering into. There's no denying it.
Speaker 2
44:14
There are gates and there are railings, but I don't think there are any guarantees. It's hard to take a new technology into society and prevent all its downsides.
Speaker 3
44:23
As an investor, I'm really curious how you view this, whether it's different from or the same as how you've always viewed things. The first time we talked, you had some really interesting thoughts on just the things you look for in young companies. Even since Intercom started, the friction and expectations around starting companies have changed. There are so many of them.
Speaker 3
44:41
YC batches are so much bigger. If there's a whiff of an opportunity, all of a sudden you get 20 people leaving their jobs to start a company. It's amazing. It's great.
Speaker 3
44:50
The world benefits from this for sure. But I think it makes investing harder, especially given sometimes the prices are quite high. How have you evolved as an investor? What are the things you're looking for?
Speaker 3
45:01
Any philosophical changes over your investing career? And then I'll obviously want to talk specifically about the change that LLMs have on that too.
Speaker 2
45:09
Early days, I used to get fooled a lot by just a great-looking product. Fooled is probably the wrong word; some of them actually worked out pretty well.
Speaker 2
45:16
I just thought if you could build software like that, that was almost enough. And in some cases, some of my earliest investments, that judgment actually turned out to be enough, but certainly not in later years. Beautiful UI and stunning landing pages became more commoditized, or maybe easier for people to do. And as a result, it was probably easier to fool me at least, but I'd suspect easier to fool half the industry.
Speaker 2
45:37
So I think I've learned now that a product can be a really, really nice, beautiful execution and perhaps still just not work out. And I actually had one of those recently. It was a meeting tool, like a Zoom competitor, that was built with AI inside it for identifying action items and all that stuff based on what people were saying in the call, and it would produce meeting notes afterwards, and it was incredible. It would produce a highlight reel of the conversation.
Speaker 2
46:01
So you could zoom into the exact specific moments that mattered, et cetera. It sounds cool. Problem is, no one wants to pay for that on top of Hangouts, on top of Zoom. Full stop.
Speaker 2
46:09
But the product, if you saw the product today, like that's insanely brilliant. It's awesome. It looks great. It works great.
Speaker 2
46:15
I've used it many times, et cetera. But if you try and take it to a company and say, well, you pay $7 a head for this on top of your Zoom fees, and you're already probably buying G Suite anyway, the answer is basically no. So that's one thing that I've just become more wise to, and it sounds so stupidly obvious when I say it like that, but the route to market, times the users' propensity to put their hand in their pocket, is a real thing.
Speaker 2
46:36
And in those cases, Zoom already has the route to market, and G Suite is already there with its pseudo-monopoly on all things productivity inside a company. Another change I've been trying to distill of late: I find myself asking startups the same question a lot lately, which is, what is the single thing you can say that is unique to you, valuable to your user, true, and simple? And all four things matter. And most people fail on one of the four.
Speaker 2
47:02
It's either too complicated a pitch, or other people can say it, or your users don't give a shit, or it's not true. And that has been a really effective filter. If it's a more random company I barely know, or a lighter-weight intro, I often use it as a pre-qualifier to actually bothering to take a call, or even sometimes reading a pitch deck.
Speaker 2
47:20
And that's a maturity that I didn't have. I would have heard the pitch 5 years ago and gotten really caught up in how nice their product looked and all that. Whereas now I find, if you fail at that hurdle, the chances of you being a really successful business, the outsized outcome that makes angel investing make sense, are pretty slim. And then the third one, I just continue to beat a drum about: it's just execution.
Speaker 2
47:38
And I won't say too much about it because I know you agree. If your idea is any good, you should assume it'll become commonplace and everyone will have that idea, because they're going to see your version of it and then everyone's going to try and do it. And if someone does it a lot better than you, or can do it the same as you but faster, you're going to lose. It won't be close.
Speaker 2
47:54
You will definitely lose. Being first really doesn't mean a whole lot; it helps you build a bit of a brand maybe, but you'll get outpaced pretty quickly once somebody has a feature you don't. So I care a lot about whether founders get that. And if the founding team isn't super technical, as in they can't design, build, and execute their own software.
Speaker 2
48:09
Then I always worry about that because they often don't.
Speaker 3
48:12
How did Intercom itself solve that execution problem? Because I think relatively early on, people realized, oh yeah, this is a thing; it's a good idea that there's a simple digital way to interact and connect with my customer in the space where they are.
Speaker 3
48:30
I'm assuming, I don't know much about Intercom's early history, but I'm assuming that lots of people tried to do this and you won. So is the simple answer just execution? You just went faster, and basically followed the advice you just gave about investing, but for building?
Speaker 2
48:42
I'd love to say yes, but it would sound arrogant. I will say we had a shitload of competitors, copycats, people who would just literally right-click, view source, let's have some of that messenger code.
Speaker 2
48:52
And there still are; there are people who literally just copy our source on a regular basis. I think ultimately what our customers know is that if you want the latest and greatest Intercom, there's a company that has it. And I think a bet on Intercom is a bet on the persistent position as the people who are doing the newest stuff best. And that's why it surprised basically none of our customers when we launched AI shit, because the tone we heard back from a lot of them was, of course you guys are the first.
Speaker 2
49:18
For a lot of our customers, that's why they sign up. And we talk a lot about this. We even publish it: if you go to intercom.com slash changes, you'll see the rate of release from our product team. We're just consistently grinding, making things better.
Speaker 2
49:28
I think I've heard it glibly referred to as the gingerbread man strategy: run, run as fast as you can, you can't catch me. That's our thing. You can copy and you can copy, but it's going to be consistently yesterday's technology tomorrow.
Speaker 2
49:41
And that's not what customers want. They want the best stuff out there. So we do talk a lot about speed internally. If you talk to any of the Intercom folks, they'll be like, they're sick of hearing about it.
Speaker 2
49:50
And it's not just speed as in code-really-hard, hands-on-the-keyboard type stuff. It's speed in processes, decision making, et cetera. For what it's worth, I think the theory is that all SaaS is basically UI on top of databases. So everything is copyable pretty quickly.
Speaker 2
50:05
The goal is to get to a position that looks daunting to copy. If you're trying to build a project management tool and you're saying, hey, let's rip off Linear, you're going, oh, well, there goes the first two and a half years of our roadmap. It's a tough one to take on.
Speaker 2
50:16
Same with Stripe. And I'd hope the same with Intercom or whatever. You just look at it like, shit, we really have to build all that. We're going to be at this quite a long time.
Speaker 2
50:23
And by the time we've done it, those guys move fast, so by the time we get to where they are today, they'll be long gone. And that's the hope: that you can build just raw product momentum as a moat, if you will.
Speaker 3
50:33
Do you worry about, I guess I'll call it the Red Queen effect of that style of building, relative to, I'm trying to think of an extreme example, Visa or something, which notoriously is an extremely good business because they built an unassailable competitive position, but there's no speed at Visa. If anything, it's better to have more bureaucracy, like, don't screw up this position that we've carved out for ourselves. There's no product velocity at Visa.
Speaker 3
50:57
Running faster than everybody seems like a great way to win, and I think it is, but do you think it's necessary but not sufficient? And ultimately you need to get yourself into a Visa-like position, where even if someone wanted to copy this, and they also moved really, really fast and had unlimited resources, there's something about it.
Speaker 3
51:14
You just can't even copy it. This is like the classic moat question. How do you think about that for Intercom?
Speaker 2
51:21
I think this, to me, is honestly where a brand comes in, and that's ultimately what you need to build. We're in the middle of a pretty extensive distillation and rebrand, if you like, of what Intercom is all about. But I think you need to transfer the energy that's felt in the product momentum into a sustainable brand position, to ultimately get to a place where it's just, why wouldn't you use Intercom? That's, I think, the best way to turn a dominant product position into a sustainable thing, where it would just seem odd not to use Slack or Stripe or Figma or Intercom, whatever.
Speaker 2
51:54
It's just, why wouldn't you? And I think that's because all those tools, you have to assume, will eventually get caught up on by someone. Whatever Stripe releases now, they'll have it in 2 years. So there will be some eventual catch-up; there is not infinite runway in all feature sets.
Speaker 2
52:08
Eventually you start releasing shit just for the sake of it. And that's a dangerous place to be. So the goal as you're doing this is to transfer the credibility that exists provably in your software into the brand, and ultimately capture hearts and minds from that point of view. And then there are other, more tactical things: community, evangelism, advocacy, making sure you reward the people who use your product a lot.
Speaker 2
52:27
Then there's the technical version of that, which is integrations and interoperability. Do you fit much better in the tech stack than everyone else does? Do you sign good partnerships and good data share agreements? If you use Intercom plus GitHub or Jira, it works really well together.
Speaker 2
52:40
And if you try and use some new app with the same stack, it'll be harder for them to get the same co-promotion, the same partner status, whatever. So there are other aspects to it as well. But I think job 1 is honestly to have the best product, and the best way to do that is to move really, really fast. Job 2 is to transfer that into being a brand position rather than a technical position.
Speaker 2
52:56
And then job 3, I think is to then expand your tentacles into everything. As I said, the hearts and minds of your customers, partners, et cetera.
Speaker 3
53:02
1 of the prevailing ideas in the business world is that the pure software business is the ultimate business, because it has very high margins, it can have very high retention, and sometimes it doesn't necessarily require a ton of upfront capital to get going. It's just a beautiful thing.
Speaker 3
53:18
If you're on, like, a debate team or something, and I force you to take the opposite side, to say: actually, here are the bad things about software businesses, here's why it's not a panacea, here's why you should consider building something other than software, what would be your debate points? And I ask this of somebody, obviously, that's done this, that's succeeded in doing this at a company that's grown very large. What would be your contra points in that debate?
Speaker 2
53:41
I think 2 things come to mind. Obviously it's a debate I'd have to prepare for, but 2 immediate things come to mind. 1 is that there are very few durable moats in software. So whatever it is you have, it's almost by definition going to be copied.
Speaker 2
53:56
Someone will infer the database structure and right click, copy the UI. And all of a sudden they have the goods of your product. So that's a challenge. And we don't really play the patents game.
Speaker 2
54:06
And we don't really play the exclusive partnership game in software in general. We generally tend to put it on the internet for everyone. So whatever your piece of software is, assume everyone's coming for it. And then there's the second piece, which is just as important.
Speaker 2
54:19
Software is very perishable. What it took to be best in class in project management even just 3 short years ago is nowhere near good enough for today's standards. And there are not a lot of categories where you can have a great product and also know that it's going to be dated as hell in 36 months. But software is 1 of them.
Speaker 2
54:39
So you have to continually reinvest to maintain your position. That's the first 1. If you compare that with other, non-software areas you might go into, you'd find industries that don't have those 2 traits of insane perishability and lack of a moat of any sort, really. The third 1 is perhaps a more nuanced point, but there's a tension in software in general, which is that the ideal software is 1 line of code that everyone uses in the exact same way, and it's got a massive margin on it and needs no maintenance.
Speaker 2
55:07
But in practice, in order to get a second customer you usually need a second line of code, and the thousandth customer needs another line of code. And if you're actually trying to grow a business, the more customers you're trying to attract, usually the more settings and preferences, at least, you have to add, so that you work a different way for enterprise than you do for a startup, or whatever, permissions, all that shit, which means the product just keeps getting bigger and bigger. And if you want to have a large market, you generally tend to have to adopt loads of different styles of workflows. And that means loads of different code, and that means more complicated UI.
Speaker 2
55:40
So in essence, for a product to be big and successful, it has to get worse. It gets worse for any individual, but better from a market capture point of view. So you're constantly trying to find a sweet spot along the collision of those 2 lines: how big is the market, and how simple can the product be?
Speaker 2
55:55
And what you'll find is that to maintain a large product, as in a product with a large total addressable market, you usually have to have a lot of software. And bear in mind, as we just agreed, that software is both not protected by any moat and is going to age out pretty badly, pretty quickly. So you have this tension of the need to continually reinvest. And now we reinvest lots, because it turns out that to go for a large market, we've built a shitload of software.
Speaker 2
56:21
And that is just a difference. This bottle of Coke is the same 1 that Barack Obama would drink and the same 1 that Bill Gates would drink, and I'm drinking it, and some random person in the industry would drink it too. That's not the case in software. If we want to say, hey, we do ticket tracking or we do customer support, whether it's the YC startup, the 50 person company, the 5, 000 person company, or the 50, 000 person company.
Speaker 2
56:40
It's a whole different ball game at every level. Yet if you say we want to be number 1 for the entire industry, that's a lot of if statements to deal with all those workflows, and every 1 of those if statements has a team of engineers and PMs and designers behind it. So you have that tension of market size versus product quality, and it's a hard puzzle to solve.
Speaker 2
56:59
And again, there are other industries, in this case Coca-Cola or whatever, where they don't actually have to play that game. So that's my other ding on software, to invalidate my own career.
Speaker 3
57:07
It's an amazing, amazing answer and list. Does it stand to reason then that the best software ever is Bitcoin?
Speaker 2
57:14
Oh, possibly. Actually, there are a few software products I've seen, to give you a shitty example, say the Notes app on iOS, where I know hardcore tech people and I know aged old grandfathers or whatever, and they all use Notes. So there are some products that cut through and just say, we're 1 thing, and we're that 1 thing for everyone, and you all like it.
Speaker 2
57:33
And I think that's really, really cool. Another example would be, say, Bear.app, just a really nice note-taking tool that I use, where they've got just 1 product for everyone, but everyone uses the product, and everyone uses it roughly the same way. So you get a lot closer to that ideal of liquid profit, because they just need to build 1 thing for 1 persona and it works really, really well.
Speaker 3
57:49
Maybe Craigslist belongs on the list or something.
Speaker 2
57:53
Yeah, for sure. I think there's a lot to be said about something like Craigslist in that regard and they dodged a lot of bullets by keeping it simple. And the flip side is imagine if Craigslist had VCs, imagine the ways in which it would have gone wrong because they would have been pushed for growth, pushed for growth.
Speaker 2
58:08
They would have added features. They would have gone through pointless redesigns and rebrands and all that. And they probably would have honestly lost their way somehow. So I can't comment on Bitcoin specifically.
Speaker 2
58:15
I always feel that I'm undergunned in terms of knowledge, but I do think a simple thing that loads of people use the same way is a really nice place to be. But I can't help but feel the wolves would be constantly at the door, trying to copy exactly your thing, because it looks so easy.
Speaker 3
58:28
Well, what's interesting is when you think about the different counterexamples. Maybe Twitter's an interesting 1 here, not that it's ever been a great business, but it's more or less been the same. The product velocity is not high, so the value is the network. Or maybe the other examples would be vertical market operating system software, where the software also sucks, but the data store that gets dumped into this thing is so valuable that no 1 bothers to switch, because my whole business is built on this thing. So maybe your point is right that, you know, all software is just the database and some UI on top.
Speaker 3
59:01
And you should think more about the database either as a network of people or critical information that a business is storing inside of it. And that's it. You just invest in 1 of those 2 things and avoid everything else. Avoid workflow, avoid nice UIs like you said.
Speaker 3
59:14
It's a fascinating question.
Speaker 2
59:16
It is. It's unfortunately what we have to spend our lives trying to wrestle with. But yeah, the wrong way to parse all this is, this guy thinks UI doesn't matter. That is absolutely not the case. I just don't think a neat UI is a sustainable position, and to make it sustainable you have to be maniacally investing in it all the time, to keep it as the Michelin star project management app or whatever it is, in order to stay at the top.
Speaker 3
59:37
Do you have any other worldviews that would be spicy at a dinner table conversation or something, to rile people up?
Speaker 2
59:43
Geez. Yeah. I have a whole folder of things that Des can't tweet.
Speaker 3
59:46
Give us 1 or 2.
Speaker 2
59:48
Here's a hot take right now, something I've seen a lot of my own portfolio go through. And it's not a positive 1, but I'm sure you see a lot of investment reports that go out that say: we're a series A or series B, we've managed burn, we've executed a RIF.
Speaker 2
01:00:01
We now have 74 months of runway and we feel really good about that. And what I translate that to is: I am going to piss away 6 years and 2 months of the best productive years of my entire career pushing this boat up a hill to see if it gets to the top, and it's not going to. And honestly, I want to reach out to those founders and connect with them on a human level to say: hey, neither I, nor I think many of your VCs who are actually good humans, want to see you do that. I think you should set yourself a time limit of, like, 6 months or a year to get this thing growing.
Speaker 2
01:00:35
And if it doesn't, wrap it up. It's not because I want the money back. It really isn't. It's just such a waste of human capital. I think what happens in an existential crisis is people try to preserve their life.
Speaker 2
01:00:46
And you have to realize your life is not your startup's life. There's a line between them. And the worst thing that happens to you isn't that you go out of business.
Speaker 2
01:00:53
The worst thing is that you stay in business, banging your head off a wall.
Speaker 3
01:00:57
Wasting your time and your life.
Speaker 2
01:00:59
And in both cases I'm not seeing my money back, but in at least 1 of them you're not emptying out the best years of your career. And I think that's probably 1 where, even as I say it now, it still sounds too blunt, but the point, I don't think, is talked about enough right now. There are a lot of companies that are dead by definition, based on their last round or their last 2 rounds' valuations.
Speaker 2
01:01:17
And honestly, I don't think that suits anyone. I think even from a venture capital perspective, they'd rather take the scraps, the pennies on the dollar, back; they'd probably back the founder again, just under different market conditions. 2021 was a hell of a drug. We all lost the run of ourselves, and that's okay. You don't have to pay the full price of that.
Speaker 2
01:01:34
The last thing I always say to people is you have to measure the ROI of the time you spend based on the future expected value of it. So go on a rager with your friends and wake up hungover and that's fine. As long as you feel like those friendships and those memories are going to be useful to you in the future. Great.
Speaker 2
01:01:50
Similarly, whatever you're doing with your business, think about it from a point of view of when you're like 50 or take your age and double it or whatever, will you value the things you've been doing right now? A lot of this is my way to get people to stop scrolling Instagram or TikTok or whatever, because this is not a useful use of time. But I think it's quite easy. I say this because I probably regret most of my 20s in this regard, but it's quite easy to go through life just too much in the moment.
Speaker 2
01:02:12
So you forget to actually think about the compound interest of the time you're spending. And I think by the time you realize that that shit matters, you're like 30. And I'm haunted by this quote that I read somewhere: inside every 80 year old is an 18 year old wondering what the fuck just happened. And that's something that haunts my mind.
Speaker 2
01:02:30
Yeah, it is scary because I'm 42. So I'm already starting to wonder what happened.
Speaker 3
01:02:35
I've so enjoyed talking to you, from the early days when you were conceptualizing Fin, and it's been such an interesting example for me to watch: a really talented team and company approaching something new that could be this disruptive innovation story. But in many cases, yours included, I don't know, it's always different. And it's always interesting to see how companies handle this.
Speaker 3
01:02:54
I've so enjoyed our conversations, and this 1 too. I ask the same traditional closing question of everybody, and I'm bummed we're out of time. What is the kindest thing that anyone's ever done for you?
Speaker 2
01:03:03
I'll try and stay emotionally neutral as I tell this story. I wrote about this on my blog, but when I was growing up, it was 1980s Dublin, not a very rich place. My dad had left home.
Speaker 2
01:03:12
My mom was left with 7 children; I'm the youngest, with 3 older brothers and 3 older sisters. And in 1988, when I was 7 years old, I was basically obsessed with this computer called the Amiga 500, by Commodore.
Speaker 2
01:03:24
I don't know if you remember it. 1 of my friends had 1, and it just seemed like the coolest thing ever. And clearly, I just wouldn't shut up about it.
Speaker 2
01:03:32
And I guess I was still at that awkward age where I was too young to work out why I couldn't have 1, because my friend had 1, therefore it was gettable. So why didn't I have 1? And I guess my mom didn't know how to explain it: poorness, 1980s Dublin, divorce.
Speaker 2
01:03:45
Divorce wasn't even welcome in Ireland at that time. So it was just, why has this woman not got her husband anymore? That was how Catholic Ireland would have seen it. And then 1 day, for my birthday, I think it was my ninth birthday.
Speaker 2
01:03:54
I'm not even that sure, I think it was my ninth birthday, it showed up. And at the time I was really thankful, but it was only when I was, like, 25 or something that the actual penny dropped.
Speaker 2
01:04:03
Like, how the hell did she pull that off? And genuinely, I can give you a full lineal history because of the Amiga 500. I learned Amiga Workbench. I learned AMOS.
Speaker 2
01:04:13
I learned how to program bits of memory. I enrolled in computer science. I met a guy; we started a blog. I met Eoghan, who's the CEO of Intercom.
Speaker 2
01:04:22
I met my wife at the same meetup that Eoghan had organized, where I met him for the first time. The entire history of my career goes back to that 1 machine. I still don't really know how she got the money to get it for me, but she did. And that was the kind of thing I'd only hoped I could pay her back for, but she passed away, unfortunately.
Speaker 2
01:04:38
So it cost whatever, like 700 Irish pounds, which is probably a grand or something like that in dollars. Now that I've got some money, I would have loved to have paid it back, but that opportunity was never presented.
Speaker 3
01:04:48
It's really interesting to think about: what equivalent thing could you do to unlock a path like that for somebody else?
Speaker 2
01:04:55
I think about that a lot.
Speaker 3
01:04:57
I think that is an incredible question to think about. And probably more often than not, it's something that's doable: the thought about what it could be, and then going and doing it. It's a magical, wonderful, awesome story.
Speaker 3
01:05:11
I've done a lot of these, I think around 400 or something, and I haven't heard a story quite like that in response to the question. So what an awesome place to close. Thank you so much for your
Speaker 2
01:05:20
time. Thank you.
Speaker 1
01:05:21
If you enjoyed this episode, check out joincolossus.com. There you'll find every episode of this podcast, complete with transcripts, show notes, and resources to keep learning.