PODCAST

BUILD
SUCCEED

Insights for building digital products that win.
NEXT EPISODE DROPPING ON AUGUST 6TH

Andrew Tunall, Embrace - From QA-Heavy to AI-Ready: Modernizing Product Delivery

Get ready to unpack what it really takes to modernize product delivery. In this episode, we sit down with Andrew Tunall, President of Embrace, to discuss how observability and reliability are more than technical metrics: they're true business value. He dives into how AI is transforming product workflows and how organizations can rethink their approach to performance, culture, and tooling to ship faster, smarter, and with more confidence.

David DeRemer:

Hi, I am David. This is Build to Succeed from Very Good Ventures. Today we talk with Andrew Tunall, president and chief product officer at Embrace. In this episode, we learn from Andrew about his experience as a product owner as well as the critical business impact of observability and modern applications. So let's go build some knowledge. Hey Andrew. All right. Thanks for joining us today.

Andrew Tunall:

Thanks, David.

David DeRemer:

So let's get started. Wanted to get us loose and get us going. There's a lot going on in tech these days. Curious, in your space, what's something that's new and exciting for you and your team at the moment?

Andrew Tunall:

Yeah, well, I mean obviously the AI boom is fully upon us. Everyone's paying a lot of attention to it. I'll tell you, it's been pretty exciting to really think about how we evolve the way we work as product managers. I mean, obviously my title is chief product officer, but I do a lot more at Embrace than that; I lead most of the organisation, actually. But I've been doing product for 20 years, and it's very much my passion.

Inventing solutions to problems and building. And at no point in the past 20 years has it been easier for product managers to rapidly prototype and come up with ideas quickly that they can vet with engineers. And as much as we are good as conversationalists and at whiteboards, especially in a virtual world, there's nothing better than a really good artefact for people to go through. And that's been pretty cool from a work perspective. I'll say from a personal perspective, it's not technology at all, but I am enjoying that the summer's finally here in Portland. I've spent a lot of the year really excited. My wife's family is from Central Canada, so I'm suddenly a very big Winnipeg Jets fan, and they made it to the second round of the NHL playoffs. There's nothing quite like Stanley Cup hockey, and that's been great to watch. So yeah, it's been a good year.

David DeRemer:

Totally agree. Big hockey fan myself. You got to see it live. Once you see it live, you're hooked.

Andrew Tunall:

We'll have to grab a couple of beers when I see you in New York in a couple of weeks and chat about NHL.

David DeRemer:

A hundred percent, absolutely. I think you mentioned a lot about product in your 20 years in that space. I also know you kind of have a unique background in terms of getting into this, with your history in economics and political science. And I was just curious if you could give us the story of how you got to where you are today and maybe how some of those early experiences were formative for you.

Andrew Tunall:

Yeah, I mean, compared to people who are going directly into product management out of an MBA or a bachelor's degree in computer science today, I definitely have a non-traditional background. But 20 years ago, in the early 2000s, and I know I look incredibly young, thank you for telling me. But 20 years ago, product management wasn't really a thing in the software world. There were project managers, there were programme managers, maybe product owners at some companies that were just learning about Agile. The first gig I had, we had just done a Web Forms implementation on .NET, and most of our software was still installed software built using VB.NET. And so it was very much design first, go to engineers and have them build stuff out. And I think everyone who's my age, or I'd say 30 or older, definitely had probably experienced a workplace where you had the engineering team and then you had the business.

The business would show up with somebody who had a marketing background or a pseudo-technical background, with ideas that were completely ungrounded in the reality of what software engineering teams could deliver. And I guess I was lucky enough to get into private industry where I had a background. My major was in political science and economics, so thinking about the broader socioeconomic impacts of policy decisions. But I minored in computer science. And so, I had this weird background where I could talk to engineers. I mean, I grew up writing code, building my own apps, building my own machines, gaming. So all my friends were software engineers, but I also understood what the business was trying to achieve. I could talk to our customers and kind of interpret their non-technical desires into something our engineering team could build. And that was foundational and kind of set my trajectory.

I did some product consulting a little bit after that. Eventually I joined a startup as a proper product manager. And from there, over the last 10 years, I was at Amazon Web Services for almost four years as a technical product manager. And then I was at New Relic, where I grew from a principal PM building their cloud observability practice through to leading as a VP. And that was really the foundation, as I thought about a future executive role, to really grow into a chief product officer and president of a company. So it's been a fun journey. I'll say product management, for people who join proper tech companies today, is a vastly different craft than it was 20 years ago. And I say it lovingly now, because I'm not 22 doing it, but at the first companies I worked for, I was a turd polisher, honestly.

I got passed a turd and my job was to turn it into gold. And one of two things happened: it stayed a turd, in which case I got blamed, or I turned it into gold and my boss took all the credit. So it's been a really interesting evolution watching people come out of college or MBA programmes or computer science programmes and actively want to be product managers and really think about the craft in a good way. And I think it's been good for the profession. I'm really excited to see what people who grew up without the limitations of the tools we had will do as they start taking advantage of AI and stuff to really accelerate how we can innovate on behalf of our customers.

David DeRemer:

Yeah, I think it's a really interesting time. I've heard people say that the domain of product management really should be at the forefront of AI, because product managers have all the thinking, toolkits, and skills for properly defining what something should do. And now you have these tools that, if you know how to define what something should do, can really help you get it there, which is really exciting for the craft and the profession.

Andrew Tunall:

Yeah, and I would say not only product managers; designers too. And one of the interesting things to me is that the startup cost for building a new app or a new idea was so intense historically. If you wanted to build a new API for your users, your engineers had to go spend weeks setting up an MVC service, all the routes, et cetera. Now it's minutes or hours for them to do all the infrastructure work, which really means we now have more time as human beings to invest in the developer experience if you're building APIs, or the user experience and usability if you're building tools. You can go rapidly prototype five or 10 different options and actually think about that exploration in a way that lets you build much better software, not just build software, which was traditionally the challenge. Now build better software, which is pretty exciting.

David DeRemer:

Yeah, that's awesome. So let's take us through a little bit more of your story, because we got to catch up to where you are today and background of economics and political science into AWS New Relic and getting into product making that choice. And now you're at Embrace and maybe take us through, maybe give us an intro of what Embrace is and maybe back to the early vision of Embrace and catch us up to where you are today.

Andrew Tunall:

Yeah, I'll say what we are today and then I'll go back in time. Our product today is an observability platform focused on end users that helps mobile and web engineers build better-performing user experiences, from how your app actually functions. The company's about eight years old, and it was founded by a team that had built consumer mobile apps, primarily in the gaming space. And it was really born out of their frustration with the things we've all experienced, and I think some teams very much still experience today. An end user, a relative, or the founder himself would be playing a game or using an app that was in their portfolio, experience a problem, and immediately ask the engineering teams what happened. Now, sure, it's a founder one-off problem, but we get support tickets all the time where people are like, what happened? And he was frustrated that too often the answer was cannot reproduce, or we don't know, we don't have the data.

Everything looks fine from the metrics we see. And so, his personal lived experience, which he suspected many users shared, wasn't matching the data sets they had. And so, he endeavoured to build this product that would help; it's one of our core features today, and I call it a play-by-play. It's very analogous to going to a box score for an NHL game: you click the play-by-play button on ESPN, and you'll see a text-based reproduction of everything they recorded happening in the game. And that's the notion of what he foresaw. Can we build the play-by-play of you as an end user, using whatever app you are using? And by collecting all of that data, can we then infer the state you are in, to therefore build patterns that help us understand the underlying problem you experienced and therefore resolve it faster?

And he didn't use the term at the time, but that in itself is observability: collecting all of the observable outputs of a system, so that you can infer state for a particular system or a user, and therefore understand and explore the overall problem that you're trying to solve. And so we've been through a number of evolutions. I think six or eight years ago, it wasn't wholly unreasonable to have a crash reporter that didn't record everything you needed. I think, broadly speaking, a lot of that stuff is a solved problem now.
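
Andrew's play-by-play idea can be sketched concretely: record timestamped events as a user moves through the app, then replay them in order up to the first failure to reconstruct what led to it. A minimal illustration in TypeScript (the names here are hypothetical, not Embrace's actual SDK):

```typescript
// A minimal sketch of a session "play-by-play": timestamped events that can
// be replayed in order to reconstruct what led up to a failure.
// All names here are illustrative, not Embrace's actual API.

type SessionEvent = {
  timestamp: number; // ms since session start
  kind: "tap" | "network" | "view" | "error";
  detail: string;
};

class SessionRecorder {
  private events: SessionEvent[] = [];

  record(timestamp: number, kind: SessionEvent["kind"], detail: string): void {
    this.events.push({ timestamp, kind, detail });
  }

  // Reconstruct the timeline leading up to the first error, if any.
  playByPlay(): string[] {
    const sorted = [...this.events].sort((a, b) => a.timestamp - b.timestamp);
    const firstError = sorted.findIndex((e) => e.kind === "error");
    const relevant = firstError === -1 ? sorted : sorted.slice(0, firstError + 1);
    return relevant.map((e) => `${e.timestamp}ms ${e.kind}: ${e.detail}`);
  }
}

const session = new SessionRecorder();
session.record(0, "view", "HomeScreen");
session.record(120, "tap", "AddToCart");
session.record(450, "network", "POST /cart 500");
session.record(460, "error", "CartUpdateFailed");
console.log(session.playByPlay().join("\n"));
```

The real value, as Andrew describes, comes from collecting these timelines at scale and mining them for patterns, but even this toy version shows how a timeline turns "cannot reproduce" into a concrete sequence of steps.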

And a lot of the things we're helping customers solve now are very nuanced challenges with app startup time, or render performance when they're processing different JSON payloads, or, in the web space, as they implement various business functionality in their React code, what the performance impacts of that are on the perceived user experience. Because I think we all intuitively know that performance challenges impact a user's engagement with your app; most teams just don't measure it in a way that lets them say what that impact is. And so, I'd argue that's a whole world of reliability problems that most user-facing teams don't have very good visibility into today.

David DeRemer:

So when I think about this space, you have crash reporting, which you mentioned, and I think that was, that's sort of the default one that everyone's like, oh, when the thing crashes, we need to kind of know, so that we can fix that pretty-

Andrew Tunall:

Yeah, it's analogous to the blue screen of death, which again, I also don't think younger people know, because most people aren't on Windows machines nowadays. Including me; I'm a 100% Mac user, which is crazy, because 15 years ago I'd have thought that was silly. But yeah, I mean it's terrible. The app literally goes away. You lose all your state, you start from the beginning.

David DeRemer:

Yeah. And then the other one is analytics, which is like you programme in some events and you can kind of see how you-

Andrew Tunall:

Sure, yeah. You're tracking where David's going to see whether or not he's doing the thing you expect him to do.

David DeRemer:

Yeah, right. And maybe conversions, did you buy the thing in your cart or whatever. And then this observability and reliability thing, it sounds like it's deeper, where, you're right, there's this tracking. Okay, well, in certain conditions, when I open up the app, it takes four seconds. We've all had that experience where you try to get the app open to do something and it's loading, you're refreshing, like, what's going on? How do you know, if you're an engineer, if that's a network condition, or if it's because the payload was super big? So that's sort of the thing you guys are really trying to help with: making sure the overall experience for the user works, even in those little small moments. Yeah, cool. Got it.

Andrew Tunall:

Yeah, and I think the way I look at it is, in the early days of mobile, the companies that were first to it kind of built these experiences where using the app was compulsory. Like, if Delta tells you the only way to check in for your flight is using the mobile app, you will tolerate quite a bit of discomfort, because you really want to take that flight. But if you look at what's happening today with e-commerce brands, everyone's got these apps out here, they're pushing notifications to you, they're giving you contextual hints, they're trying to get you to engage frequently with their digital properties, because that casualness of engagement with the brand makes it more likely over time that, through many successful engagements, you're likely [inaudible 00:12:28] convert. Because maybe you don't come with a presupposition that you're actively shopping for that thing, but through many explorations, you become convinced that you actually are looking for it.

If you start introducing subtle performance degradation into that, and it becomes painful enough for people to engage with the brand, eventually it translates to decreased conversions. And think about even app startup in different states, cold startup versus warm startup. If you impact your warm startup by half a second, and people are constantly flicking in and out trying to do product research, and it's just long enough that it seems like the app's not working, the probability that they permanently disengage is actually quite high. And by that I mean they close the app and lose the state, so they've force-closed it, you're not crashing it, but you've lost the context that exists locally. Or they just background it and go do something else entirely.

And so, I think we're not trying to tell an analytics story necessarily, right? We're not trying to talk to product teams or designers and tell them people didn't like the green button rather than the blue button. But we are trying to tell them that when people follow the path you've designed for them, they're encountering all of these types of friction points that are due to performance, and your failure to prioritise that is ultimately going to impact your end business [inaudible 00:13:46].

I've been kind of using this framing; I'll try it out on you and see how it lands with you. Obviously they're not APIs, because they're not consumed by applications, they're consumed by humans, but if you think about the APIs for your app, they are not slash-resource. They are create an account, log in, add item to cart, checkout, view order. Those are the things that your app does. And while you're experimenting and evolving those, and especially in the age of AI, evolving them even more rapidly, you don't have a handle on whether or not you are doing harm or impacting people's journey through those APIs, through latency and errors. There's a whole world of reliability things you probably don't know about.
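
One way to make this "your app's APIs" framing concrete is to track latency and error rate per named user flow, exactly as you would for a backend endpoint. A rough sketch (all names are hypothetical, not a real SDK):

```typescript
// Sketch: treat user flows ("login", "add to cart", "checkout") like APIs
// and summarise p95 latency and error rate per flow from observed samples.

type FlowSample = { flow: string; durationMs: number; ok: boolean };
type FlowSummary = { p95Ms: number; errorRate: number };

function summarizeFlows(samples: FlowSample[]): Map<string, FlowSummary> {
  // Group samples by flow name.
  const byFlow = new Map<string, FlowSample[]>();
  for (const s of samples) {
    const bucket = byFlow.get(s.flow) ?? [];
    bucket.push(s);
    byFlow.set(s.flow, bucket);
  }
  // Compute a p95 (nearest-rank) and an error rate per flow.
  const summary = new Map<string, FlowSummary>();
  for (const [flow, bucket] of byFlow) {
    const durations = bucket.map((s) => s.durationMs).sort((a, b) => a - b);
    const p95Index = Math.min(durations.length - 1, Math.ceil(0.95 * durations.length) - 1);
    const errors = bucket.filter((s) => !s.ok).length;
    summary.set(flow, { p95Ms: durations[p95Index], errorRate: errors / bucket.length });
  }
  return summary;
}

const summary = summarizeFlows([
  { flow: "login", durationMs: 120, ok: true },
  { flow: "login", durationMs: 340, ok: false },
  { flow: "checkout", durationMs: 200, ok: true },
]);
console.log(summary.get("login")); // latency and error rate for the "login" flow
```

A real system would feed this from production telemetry rather than an in-memory array, but the shape of the answer, latency and errors per human-facing "endpoint", is the same.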

David DeRemer:

Yeah, no, you're right. A lot of people are maybe not thinking about that level of detail. There's still a lot of addressable market for this, because it's a subtle thing, and everyone has had that experience where you're waiting for it to reload. Or maybe another case here, like in the example you were giving that led to this insight in the beginning, it's like, it crashed. What happened? I think the other thing that happens is when you get a spinner and it's loading, you're like, oh, internet. The internet must be bad. And it's like, well, maybe not.

Andrew Tunall:

Maybe not, right. And the funny thing is, to a human being, that feels like a me problem and semi-permanent. Like, oh, I must be in a dead spot. And so, I'll just force close that app and go do something else. And think about it if you're Uber Eats, or you're a fast casual restaurant like Chipotle, or you're a clothing brand that's running a one-hour sale for its most frequent customers. You hit people with a push notification, and then for some reason there's a performance issue on your web that makes everyone believe they're in bad internet connectivity. You've just paid some vendor a whole lot of money to push out a whole bunch of pushes to your population, you're running marketing campaigns, you've built it out in your CMS, all to completely flush it down the toilet, because you've created disengagement friction for all your users. And I would posit that most teams don't have that level of visibility, so we think it's a huge opportunity.

David DeRemer:

Yeah. So it sounds like there's a big shift here in terms of how organisations should think about observability and reliability. Of course everyone wants their app to be bug-free. You want it to be high performance. People talk about these things, but what you're really highlighting is that these things aren't just nice to have. This isn't just developers trying to create really good stuff. This is real business value.

Andrew Tunall:

Yeah, real business value.

David DeRemer:

Yeah.

Andrew Tunall:

Yeah. I'd even say, as you think about how your teams function from a software delivery standpoint, even if you take the end-user effect out of the equation. And Charity talks about this quite a bit, Charity Majors, one of the co-founders of Honeycomb, who was really forward-thinking in how you use her technology platform to go answer unknown unknowns. I think if you take all the telemetry and the representation of how your users are experiencing your entire system, your app, your APIs, your databases, et cetera, into account, that shared context between the developers building the user experience, or the SDK, or the library, and the cloud-based systems that actually power it, is what leads software engineering teams to build really resilient, high-performing, excellent technology. If your front-end teams have no idea what the API payload is going to look like, or how to design an SDK to properly retry or consume that data, and can't have a two-way conversation about payload design, because they know how their app is going to process it, then it's two teams operating without full context, and that leads to very predictable results.

David DeRemer:

Yeah, that's interesting. So when you implement these and you have these running in production, you have embrace going and you have this next level of observability and reliability, have you guys observed that it fundamentally changes some of the team behaviours, cultures and how they work together?

Andrew Tunall:

Yeah. So we work with a very large brand, and I guess this is publicly accessible information, so you can kind of look around for where our SDK is. They run lots of experiments, like thousands of experiments across their customer cohorts, and we've had them turn off experiments. And I think the team that originally adopted our product, and now it's got much wider adoption, wanted to stop being the janitors. They wanted to go from a paradigm where they were putting the brakes on everything, because they had some tidbit of information that told them something was wrong, to a team that was instilling best practises amongst every engineer in their organisation, where they were measuring and improving and could run experimentation at scale with full confidence that they understand the after-effects from a performance perspective, and can make the right decisions for their users and business.

And that's tremendously powerful. Everyone talks about wanting to do experimentation, but do you have the right data in place? Do you have the right best practises in place? Do you have the right culture in place to do it? Because if the answer is no, you have a thousand different teams checking in code like the wild west and shipping it within an hour with no QA. When, by the way, for 15 years, front ends, mobile and web, have basically operated on a QA-heavy, zero-bug paradigm, not a ship fast and fix faster paradigm. And observability in production is fundamentally about ship it, because you're going to have the right visibility to know if it's broken, and then you can disable the feature flag and you're good to go. Instead of, we're going to spend 12 weeks QA'ing this and hope we caught everything, but when it inevitably breaks, we're not going to know.
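
The "ship it behind a flag and watch it" loop Andrew describes can be sketched as a simple kill switch: the release stays out, but the feature is disabled the moment production telemetry shows its error rate crossing an agreed threshold. A minimal sketch with hypothetical names (real systems use a feature-flag service, not a local struct):

```typescript
// Sketch of the "ship it, watch it, kill it" loop: a feature ships behind a
// flag, and if observed error rate crosses a threshold, the flag is disabled
// rather than the whole release rolled back. Illustrative names only.

type FlagState = { enabled: boolean };

function evaluateKillSwitch(
  flag: FlagState,
  errorCount: number,
  sessionCount: number,
  maxErrorRate: number,
): FlagState {
  if (sessionCount === 0) return flag; // no telemetry yet, leave it on
  const errorRate = errorCount / sessionCount;
  // Disable the feature when production telemetry shows it's misbehaving.
  return errorRate > maxErrorRate ? { enabled: false } : flag;
}

// Example: a new checkout flow shipped to production behind a flag,
// observed at 60 errors across 1,000 sessions against a 5% budget.
let checkoutV2: FlagState = { enabled: true };
checkoutV2 = evaluateKillSwitch(checkoutV2, 60, 1000, 0.05);
console.log(checkoutV2.enabled); // the feature is switched off, no rollback needed
```

The design point is that the unit of rollback is the flag, not the deploy, which is what makes shipping within an hour survivable.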

David DeRemer:

No, I hear you. That's [inaudible 00:19:20].

Andrew Tunall:

And you just can't do that at scale.

David DeRemer:

Yeah, I think that with crash reporting, one of the things I've observed is that people who aren't really super familiar with the ins and outs of doing these things want no crashes. If crash reporting is identifying something, it's like, that's a problem. It's like, well, realistically, if we have a speed goal we're trying to hit, we should be determining what level of error rate we're comfortable with, one that we can identify with these tools and fix.
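
David's point maps onto the standard error-budget idea: rather than "no crashes," pick an explicit target such as 99.5% crash-free sessions and ask whether you are within budget. A small sketch (the target number is purely illustrative):

```typescript
// Sketch: an explicit reliability target instead of "zero crashes".
// The 99.5% figure below is an illustrative target, not a recommendation.

function crashFreeRate(totalSessions: number, crashedSessions: number): number {
  if (totalSessions === 0) return 1; // no sessions: trivially within budget
  return (totalSessions - crashedSessions) / totalSessions;
}

function withinBudget(totalSessions: number, crashedSessions: number, target: number): boolean {
  return crashFreeRate(totalSessions, crashedSessions) >= target;
}

// 3 crashed sessions out of 1,000 against a 99.5% crash-free target:
console.log(withinBudget(1000, 3, 0.995)); // within budget, keep shipping
```

Framing it this way turns "a crash appeared" from an emergency into a question: are we still inside the budget we agreed to trade for shipping speed?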

Andrew Tunall:

That's been a thing we've talked about in the digital transformation world for a long time. And you just have to accept that they're going to exist. Especially in the Android ecosystem, you've got tens of thousands of potential device types from OEMs. You've got many active versions of the OS, and then multiply that by the number of experiments you're running and teams actively checking in code, wanting to ship it and see whether it impacts user behaviour. And suddenly it's a number of dimensions you can't possibly control for in a QA environment.

The only way to do it is to see representative, and by the way unpredictable, user behaviour: people trying to engineer their way around some objective they're trying to achieve that you didn't anticipate. And so, you take all those things into account, and it's like, well, I don't know how a zero-bug, QA-heavy approach can ever be right for that if your objective is to ship fast. The two are really incongruent. And I'm not saying everyone should ship fast. But the world right now is gravitating toward a technology world with AI code assistants, where the pressure on every business is going to be to ship fast. And so if you don't build the right culture and tooling around it, you will fall behind.

David DeRemer:

A hundred percent. So you're out there with a really cool product; you've led the product vision, are leading the team in a lot of ways, and have really positioned it forward, leading this charge on observability and reliability. So let's dig into that. I'm curious, let's get back to your product ownership and your history there, because you've probably put some of these ideas to work even in your own tools along the way. How did Embrace change over the years, and how did the strategy change? And as a product leader, how did you validate each one of those inputs or opportunities as you went?

Andrew Tunall:

Yeah, so I guess I'll rewind a little bit to my strategy around finding product-market fit. I think that's important context for product managers and product leaders when they think about opportunities. Especially in engineering-driven organisations, or as engineers ourselves, we tend to think that if there's a problem and I have a solution, everyone will want it. And that's just fundamentally untrue, because chances are, especially now, there are many solutions to that problem, all of which solve it in some particular way, but all of which look the same. And so, a big book that I'm a fan of is Purple Cow, which is really a marketing book, but is about how you stand out in a crowd of cows that all look the same. The analogy is, if you're driving down the road and you see a herd of cows, there will be small cows, big cows, brown cows, striped cows, mottled cows, white cows, black cows. But what if there was a purple cow?

Amongst all those cows, the purple one is the only one you're going to look at and say, well, that's unique. That deserves my attention. It's worth exploration. So especially as an early-stage company, we have to be a purple cow. We have to solve a specific problem in a new and unique way that deserves people's attention, because we are too small as a company to go compete with others as just a generalist solution. We can't look like the rest, just carve out our 10% of the market, and be truly large. And I mean, we're a venture-funded startup backed by NEA; we're in the business of building a growth business. And so, just carving out a little bit of business and going about my day is not the game that I'm playing. The second bit is this notion of how you find product-market fit.

And I guess I've thought like this for a while, but I was listening to Lenny's podcast a year or so ago, and I can't remember which episode, but there's a venture fund that does a product-market fit workshop, and they put it into a structure that I now send to people as a good resource, because it mirrors how I think about it. It's this notion of the four P's: it's not just the product, but also the persona, the problem, and the promise. The promise is your messaging, the problem is the problem you're solving, the persona is somebody who has to care about the problem and resonate with the promise, and then the product has to fit all of that. So if any one of those doesn't work, you don't find really radical product-market fit to a degree that people are going to adopt it and be actively raising their hands and saying, hey, this is a unique problem you solve, and you alone kind of solve it, and it's worth my time and energy to shift from the status quo.

So if you rewind through our history, we built a really unique kind of single feature, which was this user play-by-play. But then we largely fell into the trap of doing what everyone else was doing in the space. We built a really great crash reporter, and sure, it was 30% better than what you could get from Firebase, but it was still just a crash reporter. And there were a lot of other people building crash reporters. It was a striped cow in a pasture full of cows. We built a bunch of other detection as well. It was tools in a toolbox, but it was really hard to discern what was materially different. And so, a couple of years ago, I started talking to leaders, and they kept telling me the same story: our way of operating the app feels like it's fundamentally lacking data, and everyone just goes into the toolbox and they have a couple of things they fix.

But when we look at our app store reviews, we know people are kind of having a rotten time, and we don't really know why. We know that there's probably disengagement for reasons we have no information about, and we can't quantify what that looks like. And everyone just kind of carries on, because there's no data other than, I guess, a gut feeling. And gut feelings are really hard to prioritise as product and engineering leaders when we have an endless list of stuff that there's pressure to build, unless that gut feeling is tremendously strong and you're in a position to leverage that political capital, especially in big organisations, retail organisations, where maybe the leadership isn't traditionally grounded in tech. And so, as we continued those conversations, I think what I started to realise is we needed to look different. We needed to look like a purple cow.

We had a series of hypotheses, and we needed to go codify them in a way that made us clearly different from the rest of our competitors. And there were a couple of things happening in the larger world. One was that OpenTelemetry, which had really evolved from a bunch of open standards into one of the most contributed-to CNCF projects, was gaining gravity in the larger world. We were seeing very large corporations attempt to adopt it as their standard observability language, which was really important for sharing context between application teams, infrastructure teams, everyone doing observability. And I think we realised there was a huge opportunity in the market to take a leadership position by firmly saying, okay, OpenTelemetry can be a really great standard that people adopt across every part of your technology organisation, including front-end and mobile teams.

And by applying that leadership, maybe we could then also go solve this other problem we were seeing, which is more and more technology leaders asking, can you help me understand how all these problems we believe exist correlate to whether or not a user is staying engaged with our app? And so, I wouldn't call it a pivot at all. I'd call it an evolution: we went from this disparate toolkit to how can we start putting the pieces together?

And we have some really awesome stuff launching here soon, and I'm really, really excited for V2 of it, where we're starting to apply machine learning and AI to help teams really understand the things users were doing and the correlation of performance signals to whether or not they stay engaged. As you talk about real service-level objectives for your end users, it would be pretty awesome if your reliability dashboards could show the key things your users do in an app and whether you could likely correlate them to a recent change or reliability signal in your app. Because then it's not just, we released something users don't like. It's, we released something that actually keeps them from liking it. And for a technology organisation, for a reliability culture, that's huge.

David DeRemer:

Yeah, totally. And what you guys are doing, it sounds like, is uncovering information that otherwise was sort of mysterious. Without it, you can jump to the wrong conclusions. Like, if you ship an update and you're not getting the business metrics you expected, you could say, oh, the feature, no one wants the feature. And it could just be, no, no, the feature takes a second and a half too long to load and people are frustrated.

Andrew Tunall:

Yeah, I think app load time is a good example. Everyone has experienced it across every app they use. But every app has a unique value proposition, and there's probably a gradient of how patient users are willing to be during that load time to experience that value proposition. For something like X, formerly Twitter, that tolerance is very low, because they have intentionally created a user interaction where I want instant gratification and I want it many times throughout the day. For other apps, it may be quite long, but it's unique to your context. If you impact that, do you actually know what change that has on your users' willingness to stay engaged? I would argue almost nobody does. And by uncovering that kind of information, I think people can just build better software, which is really our grander mission, right?
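The analysis Andrew is describing — whether load time correlates with users staying engaged — can be sketched as a toy script. Everything here is fabricated for illustration (the session data, the retention model, the bucket sizes); it is not Embrace's actual methodology.

```python
# Toy illustration: does app load time correlate with retention?
# All data is simulated; slower loads are made less likely to retain.
import random
import statistics

random.seed(42)

# Simulate sessions as (load_time_seconds, retained_flag) pairs.
sessions = []
for _ in range(5000):
    load_time = random.uniform(0.2, 5.0)             # seconds
    p_retained = max(0.0, 0.95 - 0.15 * load_time)   # fabricated relationship
    retained = 1 if random.random() < p_retained else 0
    sessions.append((load_time, retained))

# Bucket load times into whole-second bands and compare retention rates.
buckets = {}
for load_time, retained in sessions:
    key = int(load_time)  # 0-1s, 1-2s, ...
    buckets.setdefault(key, []).append(retained)

for key in sorted(buckets):
    rate = statistics.mean(buckets[key])
    print(f"{key}-{key + 1}s load: {rate:.0%} retained (n={len(buckets[key])})")
```

In practice you would segment a view like this by app version or release, so that a shift in the curve points at a recent change — the reliability-dashboard idea Andrew describes.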


David DeRemer:

Yeah. So you're now the president of Embrace as well, and you came up through product, right? How has that product-driven leadership role evolved over time? Now that you're in an executive position, how has your leadership style morphed? And I'm also curious, in an organisation where product sits in such a senior position and has been so vital to the trajectory, how has that influenced the engineering culture and your ability to ship what people want?

Andrew Tunall:

Yeah, I'd argue that especially in technology organisations selling to developers, having people with product or engineering backgrounds at the most senior levels of leadership is critical to building an authentic brand, and a product and go-to-market motion that resonates with their user persona. I have seen companies that build really great products hire people with sales and marketing backgrounds and no experience in the domain as their CEO, and the culture absolutely takes a nosedive. The product authenticity is no longer there and the company is hosed as a result.

That said, running a company at the most senior level, where you're doing more than the product or technology organisation, is a really hard gig. I mean, we're a venture-funded company; we burn cash as we endeavour to grow. My day includes everything from reviewing a redlined customer NDA, to subbing in for one of our customer success people because a customer needs an upgrade done, to popping onto a sales call at 6:00 in the morning with a European customer, to sitting in a roadmap planning exercise with my PMs or actually going through product details and mocks.

The context switching can be nightmarish and it's totally not for everyone. A couple of lessons I've learned: you have to have really accountable, high-ownership leaders around you. Ownership is probably under-indexed in many organisations. Are there people you surround yourself with where you just know they're going to get it done? Especially for a small company, those individuals are gold, because if I can hand something to somebody and know they will find a way to produce a very high quality outcome, that's something on my plate I don't have to worry about. I have the trust and relationship with them to know that if they encounter a problem and realise it's not going to meet my standard, they're going to come to me proactively and we're going to work on it as a collaborative team, instead of me looking at the result and saying, this just doesn't meet my standard.

How did it happen? So yeah, it's been an interesting transition, and it's been great for me. The reason I went from a very well compensated VP at New Relic, running a lot of product, to a much smaller company was largely about my career arc: being a CPO and a senior leader at a pre-IPO or recently IPO'd company kind of necessitates you've done a lot of the gig before, even at a smaller scale. There are a lot of hard lessons about managing people, managing culture, getting people excited, and communicating tough messages that you don't learn as a functional middle manager in a large organisation; you're insulated from a lot of it. In three and a half years I've learned a lot of those lessons, and they'll pay dividends for my career.

So there are lots of reasons to go to a startup. You get to learn brand new skills, you often get to do things well above your pay grade, you get to define your role in many ways. Getting rich quick is not one of them, I will say, not one of them. People think, oh, you go to a startup, you're going to be a multimillionaire. Most of the time, and by most of the time I mean 99% of the time, you will not. Everyone likes to index on the outliers, because otherwise nobody would be crazy enough to do it.

David DeRemer:

Yeah, you're right. You rattled off a lot of management and entrepreneurial lessons in the last two minutes or so, and I totally agree with every single one of them, by the way, because I've gone through the same stuff. The lessons you learn are hard to explain; you don't really learn them unless you experience them. And it's just a truth that it's not for everyone. I think the startup world does put the successful startup founders on a pedestal.

And you look at the people who are really successful now, the Jeff Bezos [inaudible 00:33:57] of the world, guys worth hundreds of billions, but they've also been doing it for a long time at this point, with all the stress that entails. And for every one of those, there are thousands who don't make it. We recorded an episode with our VC from Celesta, and I asked him what surprised him about being a VC, and he said, "Actually, what most people don't know is that I spend most of my time dealing with all the companies that aren't going well. The ones that are going well, we don't talk about."


So yeah, so many of them don't go well, and it is really tough. But maybe that's a good segue to a question: given everything you've learned in your product journey and your startup journey, if you could go back in time five years, what would you tell yourself that, in retrospect, would have saved you a lot of stress?

Andrew Tunall:

So five years ago I was still at New Relic. First of all, I didn't know New Relic would be acquired by a private equity firm at the share price it was, so I probably should have stayed for my own pocketbook. But you can always make more money; what you can't always do is learn the lessons. I think from a career perspective the move was really good for me. My partner in crime, Eric, one of the co-founders, primarily deals with our strategic investors, strategic customers, et cetera. He's been an incredible partner and taught me a tonne.

These thoughts are probably most relevant to VC-funded startups. We had raised a fair amount of money, and I think the 2020, 2021, 2022 world is not indicative of the rest of history: the spending mood of potential customers and the pressure from the VC community for companies to grow at all costs were unlike anything that could ever really exist again in the future, or really should. If I could go back in time, rather than hire 50 people, maybe I'd hire 10 who have outstanding high ownership, outstanding contacts in the industry, who are just absolute top performers, and we'd build smaller and better and over-index on punching above our weight before we grow.


I think a lot of companies make this mistake, and it happens at every level. I saw it at Amazon, you see it elsewhere: they have something going well, then they grow to three or four or five times their size, and suddenly the culture they had, and everything that relied on a very high internal bar, a high talent level, and people executing well above their weight, goes out the window, and the direction starts to tilt more toward mediocrity than excellence. So that's probably the number one. Knowing now about the larger economic uncertainty and the decline of the crazy [inaudible 00:37:01] multiples and people buying [inaudible 00:37:02] software, before we grew crazy or doubled down on what we thought was working, I would have asked: how do we go from "I've convinced myself" to "the world is telling me, because I can't say no anymore"? I wasn't there, but I've heard the stories: right after New Relic went public, they literally could not say no to business.


There were so many people showing up asking for order forms that reps were blowing out their numbers and not working 45 days out of a quarter, because they were closing business so fast. I'm not saying you should wait that long to grow your sales team or expand your product, but they certainly had what I would call radical product-market fit at that point. There was no question that you could add people to that organisation and make more money. I don't think most companies that endeavour to grow are actually at that point. So yeah, a lot of it is figuring out whether you absolutely have product-market fit, and it's not you convincing yourself, it's the rest of the world telling you, because they're banging at your door, inbounding at such a level that you have to rise to meet the occasion rather than find the occasion to meet your growing team.




David DeRemer:

Yeah, love that. Find that product-market fit. And it sounds like there's also an element of paying attention to and optimising the team and the people you have around you while maintaining that pace. I totally agree. When you're moving really fast you hire great people, but sometimes great people are still not the right people, and that's tough when you're moving fast; those are lessons learned as you go. So maybe wrap us up with this: you've mentioned AI a couple of times, and I'd imagine AI has got to be something you guys are thinking about, since you have a lot of data flowing through your system. What does the future hold, do you think, for Embrace?

Andrew Tunall:

Yeah, I alluded to it: we're starting to do some stuff around building AI that can assist you in defining the boundaries of the things you want to look at and in helping you understand the data. I'm excited about how we can increasingly employ AI to do things that were time-costly but not necessarily super high leverage for a human. That's things like: we know events are clustered around a common time period, so we can place them on an affinity map. Is there a way we can train a model to say, hey, these things all make sense together, tell me what this step looks like? That helps us define data shapes. But I have quite a bit of scepticism too. We're broadly salaried employees, so there's a relatively fixed cost to an organisation for us.


I have some scepticism that right now the cost model of AI is a perfect fit for replicating all the things our human brains are pretty good at doing around pattern recognition at scale. And that's simply because I've seen it: it's pretty easy for enterprises to run up massive bills in a hurry doing things that a human employee could probably do, just marginally slower. Historically, in our world, the observability world, I had a friend who was an engineer say, "I wanted to set up New Relic and then just ignore it as much as possible. It was kind of an insurance policy that I hoped never had to pay out." I have a different take, which is that great observability tech is something you're ideally interacting with every day, because you're using it as the control panel for what's going on in your software and you're using it to constantly experiment and tweak.


And if you shift to that paradigm, then the level of emergency should be far lower. The house isn't burning down; it's just a spark that could maybe turn into a fire, and I want to address it because I care about my overall quality. In that world, we can't have AI costing us $15,000 for every little thing when the house isn't burning, right? Now, I could be totally wrong. They could figure out a way to decrease the cost of AI tenfold or a hundredfold.

But right now, we did some early experimentation with taking the play-by-plays of users going through the app, placing them in tokenized form for an LLM, and having it do comparisons. And something where I go click, click, click, click, click, I instantly saw, just cost me $15. If I have 1,000 engineers in an organisation doing that 10 times a day, the numbers start to add up pretty quick. So I think it's very much TBD. I am receptive to the idea, I would love to see it, I think it's a pretty cool idea, but as a very pragmatic person, you can't just say the technology can do this. It's: is it the right thing for the technology to do, given the problems your customers actually prioritise and their willingness to pay for it?
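Andrew's worry about cost at scale is easy to make concrete using his own numbers ($15 per comparison run, 1,000 engineers, 10 runs each per day). The 260-working-days figure below is an assumption added for the annual estimate.

```python
# Back-of-the-envelope cost of the LLM experiment Andrew describes, at scale.
# Figures from the conversation: ~$15 per comparison run, 1,000 engineers,
# 10 runs per engineer per day. Working-days count is an assumption.
cost_per_run = 15        # USD, observed in the experiment
engineers = 1_000
runs_per_day = 10

daily_cost = cost_per_run * engineers * runs_per_day
annual_cost = daily_cost * 260   # ~260 working days per year (assumption)

print(f"Daily:  ${daily_cost:,}")    # $150,000
print(f"Annual: ${annual_cost:,}")   # $39,000,000
```

At $150,000 a day, the cost dwarfs the salaried-employee baseline Andrew contrasts it with, which is the core of his scepticism.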

David DeRemer:

Yeah, that's really insightful, and you've unpacked something interesting about this AI boom: most of the consumer tools people are used to engaging with, like ChatGPT or Gemini, are either free or very inexpensive in their consumer versions. But if you're a business building with it, you can very easily rack up a very large bill very quickly, and I think the general public doesn't really see that. So there are these expectations of, oh, whatever, just hook it up to your AI and it's going to do this stuff. Right? Well, hold up. It's one of those things, even fixing bugs in observability: sometimes you get a thing and you're like, well, that only happened to one out of 100,000 users, so it's kind of not even worth fixing, kind of a-

Andrew Tunall:

Especially in mobile, by the way, because if it's based on an outdated supposition, by the time you fix it, given its lack of prevalence, you may have a new operating system or a new app version that totally wipes out the feature, et cetera, right? So why would you employ something that's going to cost you a couple hundred dollars at scale to do that, when it may not even be relevant anymore?

David DeRemer:

Yeah. Well, and that's where the AI thing could get really out of control, if you enable all these agents to just do all this stuff. So awesome conversation. Embrace, if people wanted to learn more about your product, if they're hiring, how would they find you and Embrace?

Andrew Tunall:

Yeah, so we're at www.embrace.io. Don't just Google Embrace, because you won't find us. That's another lesson learned, from before my time: name your company something where, when people google it, they can always find you. They can also find me personally; my name's really unique, so if you search Andrew Tunall on LinkedIn, you will definitely find me. I'm the only one. That's good and bad, I guess. I'm on Instagram, I am technically on Twitter, but every day I'm threatening to leave. So LinkedIn is probably the best way to connect with me.

David DeRemer:

Awesome. Cool. Well, there you go. If people are interested in this, they should definitely check you out, because you guys have an amazing tool and you're real thought leaders in this space, moving the needle.

Andrew Tunall:

Thanks a lot. [inaudible 00:44:08]. It's been great working with the VGV team too. I'm excited to see what you guys have next.

David DeRemer:

Awesome.

Andrew Tunall:

Thanks. All right, cheers.

David DeRemer:

Thank you for joining us on Build to Succeed, a Very Good Ventures podcast. We hope you enjoy exploring the experiences and insights of leaders that have built successful digital products. Please take a moment to leave us a review and don't forget to subscribe to get our latest episodes. Thanks again and see you next time.
