Scott Clark Transcript
Clint Betts
Scott, thank you so much for coming on the show. I'm super impressed with your background and what you're building with Distributional, and you did SigOpt prior to that, which is really, really fascinating, and you had that acquired in 2020. Tell us your background and how you became the co-founder and CEO of Distributional.
Scott Clark
Excellent. Well, first of all, thanks for having me on. I'm really honored to be here. I'm really excited to chat today. Yeah, I can tell you about how... I can sum up the last 20 years of my life in a few minutes for you here, but I would love to dive deeper wherever it's interesting. So, as you mentioned, SigOpt was my first company. Started that after completing my Ph.D., and it was kind of a culmination of some of that work and focused mostly on how do you optimize AI systems back when people were actually building their own models instead of using foundational models.
So, we used a combination of Bayesian optimization methods, some of them very cutting-edge. We published a lot at NeurIPS and ICML, and it was built upon some of my Ph.D. thesis work to help people do hyperparameter optimization and neural architecture search. Basically, how do you tune all the different knobs and levers that go into how the model is actually trained and learns? This is, of course, the exact wrong way to start a company, where you have something... some cool math, and you're like, "Maybe I can build a product, and maybe I can find a market."
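To make that concrete for readers, here is a minimal sketch of the kind of hyperparameter-tuning loop being described, written with the open-source scikit-optimize library rather than SigOpt's own product; the model, dataset, and search ranges are hypothetical placeholders chosen purely for illustration.

```python
# A minimal sketch of Bayesian hyperparameter optimization using the
# open-source scikit-optimize library (not SigOpt's actual product).
# The model, data, and search ranges are hypothetical placeholders.
from skopt import gp_minimize
from skopt.space import Real, Integer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, random_state=0)

# The "knobs and levers": learning rate and tree depth.
space = [Real(1e-3, 0.3, prior="log-uniform", name="learning_rate"),
         Integer(2, 8, name="max_depth")]

def objective(params):
    lr, depth = params
    model = GradientBoostingClassifier(learning_rate=lr, max_depth=depth,
                                       random_state=0)
    # Minimize negative accuracy, i.e. maximize cross-validated accuracy.
    return -cross_val_score(model, X, y, cv=3).mean()

# Bayesian optimization: a Gaussian-process surrogate decides which
# configuration to try next instead of exhaustive grid search.
result = gp_minimize(objective, space, n_calls=20, random_state=0)
print("best params:", result.x, "best accuracy:", -result.fun)
```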
But we were able to make a big dent in the space and ended up working with a lot of what I would call very sophisticated modeling teams, people who had big, expensive models that they needed to make a little bit better. We worked with large streaming services, hedge funds managing trillions of dollars, the US intelligence community, and credit card firms, and basically anyone who had something that was important enough that it needed to be fully optimized.
I ended up selling that company, as you mentioned before, to Intel in 2020, where I ended up leading the AI and HPC division of their Super Computing Group. So again, very big, expensive models, now being deployed on very big, expensive supercomputers. Basically, over the course of those 10 years that I was taking that thesis and helping firms optimize AI models, I slowly came to the realization that the thing that was really preventing people from getting a ton of value out of AI wasn't performance. It wasn't about how to over-optimize something.
How do I overfit some eval metric by another half a percent? It was really fear of the downside, not trying to squeeze that last little bit of the upside. And so time and time again, we'd get questions about, "Great, you optimized it. How did you break it by optimizing it? Is this still robust? Did you make it more brittle?" All these sorts of things. And I saw this in the SigOpt days. I saw it in the Intel days. I saw it just time and time again. And so, a little bit of a slow learner after hearing this question for a decade, I was like, "Okay, maybe I'm actually solving the wrong problem here."
And that's really where the idea for Distributional came into being: the thing that's preventing enterprise value isn't performance. It's confidence. Confidence is about consistent and reliable behavior, and performance is an aspect of that. You want consistent performance, you want reliable performance, you want good performance, but behavior itself is more than just performance. It's not just the end result, but it's how you get there. It's what you do. Sometimes, behavior can give you information about how performance might be shifting or changing well ahead of that performance actually degrading.
And so the idea was, can we take a lot of what we learned by helping people build evals, helping people do optimization, helping people build more performant models, and now help them build more confidence in their models? And that's the crux of Distributional: doing that through an enterprise testing platform, using some of the same tricks and techniques that you would use to gain confidence with traditional software, but applying them in this non-deterministic, non-stationary, extremely complex world of AI.
Clint Betts
Yeah. How do you do it in the world of AI? So where... How do you find that balance? I guess that is my question. Part of AI is that it's meant to be creative. It's meant to kind of come up with stuff on its own. And so, how do you find that balance between the two?
Scott Clark
Yeah, that's a great question because you don't want to necessarily put it in a box that's so constrained that it basically looks like traditional software. But that nondeterminism is a double-edged sword: getting a different answer for the same question could actually lead to really interesting and emergent behavior.
The non-stationarity, the fact that these models are evolving and changing underneath you, actually sometimes helps them get better. And so I think the tradeoff is being able to understand what that behavior is, what it looks like in a holistic way, being able to detect when that behavior changes, and then being able to quickly triage: was this good? Was this bad? Do I care about this or not?
But it ends up being fundamentally different than traditional software, where you want it to do the exact same thing, and whenever it doesn't do that, you've found a bug. AI testing is more about trying to determine whether you're getting unwanted behavior. So first you need to detect change, then triage change, and then ultimately resolve change. But it is a little bit more of an unsupervised sense of doing testing.
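As an illustration of the idea (not Distributional's actual product), here is a toy sketch of distribution-level behavior testing: compare one behavioral statistic across two versions of a nondeterministic AI app and flag a statistically significant shift for triage. The data and the significance threshold below are hypothetical.

```python
# Toy sketch of "distributional" behavior testing: compare a behavioral
# statistic (here, response length) across two app versions and flag a
# statistically significant shift. Not a real product, just the idea.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Pretend these are response lengths (in tokens) from the same prompts,
# run against a baseline app version and a candidate app version.
baseline_lengths = rng.normal(loc=120, scale=20, size=500)
candidate_lengths = rng.normal(loc=135, scale=25, size=500)

# Two-sample Kolmogorov-Smirnov test: did the whole distribution change,
# not just the average?
stat, p_value = ks_2samp(baseline_lengths, candidate_lengths)
if p_value < 0.01:
    print(f"behavior shift detected (KS={stat:.3f}, p={p_value:.2g}); triage it")
else:
    print("no significant shift in this behavioral property")
```

In practice you would track many such behavioral properties at once and decide, property by property, whether a detected change is a welcome improvement or an unwanted regression.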
Clint Betts
OpenAI has really made AI mainstream. Everybody's talked about it ever since ChatGPT, and they're not the only ones, obviously, but ChatGPT really kind of put this on everyone's radar and had everybody freaking out.
Obviously, AI has been around since the 50s, but now it's like, "Hey, what are we going to do about this thing?" And I know that this question is probably unanswerable, but where do you think we're going with AI? Where does this all go? Over the next five to 10 years, what are you planning for? What are you expecting?
Scott Clark
I'm excited about the surface area of types of problems that it can solve, which is just continuing to increase. And so you go back to the 50s, and that original summer symposium or whatever it may be, and I'd say they had a grand vision back then, and it's really interesting to go back and look at some of the things where it was like, "Man, if we can do spell check, that's AI." And that was largely solved in the 90s and things like that. And AI has always been about, "What can we not do super well today?"
But one thing that has been completely true, I think, over the years is the surface area of the types of problems that it can solve. Can it do spell check, for example? Can it make a prediction, a binary prediction of just true or false? Okay, can it make more continuous predictions? Can it detect things? As we start to layer on these systems, and this is where I think what's now being coined as agentic systems comes in, compositions of systems and things like that, the surface area is only expanding because the foundation is getting stronger. We're having more of these individual components.
They can do more things. They're getting more performant at each of the things that they're doing. And then when you combine all of those things together, you get self-driving cars, you get personal assistants, you get all of these things that are built upon technology that's been around a long time, but combined in interesting ways and made better through some of the leapfrogs in technology, both software and hardware, that have happened recently. So I just think we'll see it be more pervasive and more performant, but that only increases the need for us to have more confidence in it.
Clint Betts
Yeah, the problem you're trying to solve is really interesting, especially around confidence and whether you can trust what's being spit out and all of that type of stuff while also letting it be creative.
Again, like you said, you have this really interesting double-edged sword there, and getting it exactly right down the middle is interesting. What are some interesting use cases that you're seeing beyond what you guys are working on for AI and interesting companies or things that you're seeing that are getting funded that get you excited?
Scott Clark
Yeah. I mean, there's probably not enough time to go through all of it, to be completely honest. It seems like every day, someone is coming up with some new approach, some new thing to attack with it.
I'd say one of the things that we're seeing more and more is, as people start to feel more comfortable with this next wave of gen AI, they're getting beyond those science projects and those like [inaudible 00:09:20], "I wonder if we can just have it do QA on our internal documentation," or, "I wonder if we can have it answer HR questions just for our internal employees."
Once they're kind of getting that feel for it and starting to build confidence, in all honesty, now they're starting to say, "Okay, how do we use this to actually transform the business? What if we use it to not just take a half a step, but take a full step, take two steps, take three steps, whatever it may be?"
And we're seeing more and more Fortune 500 and Global 2000 enterprises start to embrace this idea of, "It's not enough just to do it and check the box saying we did it. It's how do we actually create value?" And I think that's always been the issue, even through the last wave of AI and machine learning and data science before it. It's a really cool technology, and it's novel, but it always comes back to how this actually moves the ball forward.
Clint Betts
Yeah, yeah. And then, as you... you just raised a Series A, right-
Scott Clark
Yeah, yeah, yeah.
Clint Betts
... with Andreessen as the lead, which is a huge [inaudible 00:10:24]-
Scott Clark
So, actually, it was Two Sigma Ventures for the Series A. Andreessen led our seed.
Clint Betts
Andreessen led the seed. Yeah, yeah, yeah.
Scott Clark
And participated in [inaudible 00:10:30].
Clint Betts
Yeah. Tell me, what's it been like raising money? And maybe a broader question there is, I mean, you've done this once before, right? You've gone through this process. What have you learned, and what have you taken there to avoid some pitfalls from the first time?
Scott Clark
Yeah, I'd say I made a lot of mistakes the first time around, and so the joy of doing it the second time is you can learn from those mistakes and make all new mistakes. But one of the nice things about fundraising, especially the second time around, is we'd been able to build up some of these relationships over many years.
Andreessen Horowitz was one of the lead investors in SigOpt, and I got to work with Martin Casado right as he was joining Andreessen Horowitz. He was on our board for many years, so he got to see what things looked like when they were going well, when they were going poorly, when we were selling the company through the pandemic, all of these sorts of things.
So it made it really easy when I was going to start the next company to say, "Hey, do you want to [inaudible 00:11:32] back? Do you want to try it again? Now that we've learned all of these different things together and built up all of this trust and confidence in each other, we attacked this adjacent problem.
It's not a completely dissimilar problem, but it's really taking everything that we learned from the last decade and seeing if we can go even further." And similarly, Two Sigma. They were a big customer of SigOpt and an investor in SigOpt, and one of these firms that just knows how we do business, what we build, et cetera. And so I think a lot of it is that we get to stand on our own shoulders to jump forward here.
Clint Betts
How much did you raise in Series A? Like the seed was 19, is that right?
Scott Clark
The seed was 11, Series A was 19, so 30 total.
Clint Betts
Yes. Okay.
Scott Clark
One big difference between this and the last company was, I think, it took us four or five years to raise a Series A after going through Y Combinator and the SigOpt days. Here, we raised the seed right out of the gate, started with a team of 11, and then raised the Series A within the first year. And so, again, making all new mistakes but 10 times faster.
Clint Betts
What do you make of those numbers being seed and Series A rounds? Those used to be like Series D rounds, Series C, like that type of thing. How do you think that's changing, and why do you think that's changing?
Scott Clark
Yeah, that's a great question. I remember when we were at demo day of the Winter '15 Y Combinator batch, we raised a $2.2 million seed, and that was considered a mango seed, like a large seed. And now that would be considered an extremely small seed. So I think a few things have happened. One is, I mean, some businesses have pretty large capital constraints, and so they need to be able to do that, and those companies are raising $100 million seeds.
But I'd say expectations have continued to ratchet up. What can you do, and when? When do you need to start making money? When do you need to start being profitable? How large of a team do you need to really make a splash in this environment? And so, as those expectations have increased, you need to be able to put capital to work to be able to meet those expectations. But I do look back and think, when I was just a dumb grad student trying to figure out how to start a company for the first time, I wouldn't have known what to do with $30 million.
And so I really feel for some of these first-time founders that now have these raised expectations, have the capital because that's become the standard, but then maybe haven't deployed that before or been able to build a team or scale it. And it just exacerbates all of the different things that you need to get right. A startup is about getting a hundred things right or at least not getting them wrong. The idea is part of it, and the go-to-market is part of it, but there are a million other ways to die.
Clint Betts
What does a typical day look like for you?
Scott Clark
That's a good question. Every day is a little bit different, but when I'm at home and not on the road, I have two young sons, so it's helping get my two-year-old and six-year-old out the door and ready for school. And then a massive context switch into trying to figure out what I'm doing that day, whether it's podcast interviews like this, whether it's customer calls, whether it's diving deep on go-to-market, product, operations, all the various parts of the company.
Thankfully, I have an incredible team. This is one of the things we were able to build upon, along with the investors and some of our customers from the previous companies: being able to bring in a lot of people who had worked with us before, and then their networks. And again, having a strong culture. But figuring out how I can get out of the way is literally what every minute of the day is about.
We have a lot of really smart people, and I just need to figure out how to get out of the way as much as I can and then tell our story. And every day, I try to do that a little bit better. Every little... Every 1% gain toward that end helps. And if you can compound about seventy of those 1% gains, that's a doubling in productivity. So, if I can pull one of those out a week, I'm happy.
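A quick sanity check of that compounding arithmetic, added here for illustration:

```python
# Roughly seventy compounded 1% gains multiply out to about 2x.
gain = 1.01
print(gain ** 70)  # ~2.007, i.e. about a doubling in productivity
```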
Clint Betts
How have you thought about recruiting? I mean, you mentioned a lot of the folks who are coming in here now are... you've worked with in the past. How do you develop and find talent in this new kind of world where some... I don't know where you're at, and this may be part of the question: if you're all in the office, hybrid, all remote, or things like that, how do you go about it? It must be different recruiting than it was the first time.
Scott Clark
Yeah, definitely. And I think again, there's a handful of things that have changed, so it's hard to isolate it to one specific change. However, one is that we have more experience with it, and we were able to build upon the success that we had before. So it's not, again, that I'm a dumb grad student who's figuring out how to build a team for the first time. We have better systems in place, so we can attract, I'd say, more experienced talent who's looking to just run fast as opposed to figure it out. Figuring it out is very valuable, and a startup is a great way to learn really fast because it's drinking out of a fire hose, but at this point, it feels like we're trying to assemble the actual fire team. And so that's been a big difference. I'd also say the market has shifted pretty dramatically. Five or six years ago, we'd, unfortunately, get into wars with Google and Facebook or hedge funds and things like that, and we had to really pitch people on the vision of, "Well, you get to work with these teams because they're our customers, but at the end of the day, we can't match them in certain ways," because the bidding wars were getting insane.
And now that some of that has retracted, some of this idea of, "Well, as long as you go to a FAANG company, you're set for life. They'll never do layoffs. The price will always go up," it has created a little bit more liquidity in the market, which I think is good for everyone. It allows us to attract talent that wants to be able to do really good things, wants to learn a lot, wants to move really quickly, and we can also be a lot more competitive because some of the market's retracted a bit in other ways. But we rely very heavily on the network of people who are already in the company. A lot of them came from SigOpt, but at this point, I think less than half of the team is from SigOpt.
And so everybody knows someone. Everybody has gone on and done something. Some of the people who were at SigOpt and rejoined this company had gone on to form their own YC company and are now back with us, or had gone on to work at Google or Facebook or whatever and have now come back. Cross-pollination helps with just better ideas, but it also brings in a much, much broader network that we're able to leverage, and investors can help with that as well. We get a lot of referrals from people who are excited about not only the quality of our investors but also second-time founders doing it again.
Clint Betts
What's it like in Silicon Valley now? I know there was this exodus or supposed exodus to places like Miami or Austin and these types of places. It still seems like there's no better place in the entire world to run a startup, particularly if you're in AI. It's got to be the epicenter of that. But how have you kind of managed the ups and downs of Silicon Valley itself?
Scott Clark
Yeah, I guess I was part of that exodus to a certain extent. After selling the company, the first company, SigOpt, to Intel in 2020, I moved up to Portland, Oregon, which is where I grew up. And it was the middle of the pandemic, so it was like, I've got two young kids, might as well be close to family and that sort of thing. But like you said, Silicon Valley is the epicenter, so no matter how much you try to get away, it pulls you back in. And so I moved back down to Palo Alto about four weeks ago, actually.
Clint Betts
Oh, really?
Scott Clark
Yeah. So, it's been an intense couple of months. But I'd say one interesting shift that we've seen is while it is still very much, I'd say, the epicenter, and there's a lot of really great stuff happening, there's a lot of... This is obviously where the investors are. There's a lot of events. There's a lot of things going on. Talent is more dispersed now. And so we are this... we are a mostly remote company.
We do have hubs and small offices in the Bay Area, New York, and Toronto, and it's really about having the right people around the table. And those people have a lot more options. They have a lot more mobility, and with a lot of the technology, the way that people are used to working has become more feasible in a remote setting. I mean, we're having this interview right now remotely, whereas 10 years ago, I would've probably had to fly to your office or whatever.
Clint Betts
Yeah. We couldn't have done this 10 years ago. It's actually incredible that we can do something like this now. What do you think... How do you build a culture? And maybe talk a little bit about how you built it the first time, how you've thought about building it this time, and how you do it in a mostly remote environment.
Scott Clark
Yeah, so one of the things that I'm a strong proponent of is that culture needs to be one of the pillars of the company. I think you have strategy, execution, and culture, and you need to treat culture as a peer to the others: where you're going, how you're going to get there, and who you are. The culture ends up permeating all of that. It's how big of a swing you're going to take and how effective you are in your ability to actually get there.
And so I think it's something that needs to be extremely important, but then it also needs to be extremely active. It's who you recruit, who you reward, and who you end up releasing. Almost... To use another analogy, I think of it almost like a tree. It'll grow on its own. Sunlight and rainwater, it'll do whatever, but if you actively tend to it, you can maybe make it grow a little bit faster. If you prune it, you might be able to make it grow a specific way if you're trying to make it fruit or whatever it is. There are specific things that you can do to encourage specific aspects of it, but that needs to be a constant process. And so we thought about this very carefully as we built up the team in SigOpt, trying to make sure that we grew organically and intentionally. And then, like everybody else in the world, I had that whiplash of, "Okay, we're going to go into lockdown for two or three weeks, and we'll see everybody again." Two years later, we sold to Intel. But I think that that strong nucleus allowed us to withstand something like that.
We actually saw an increase in productivity when we went remote because everybody knew their role, their responsibility, how they were going to execute, how they could communicate, how they could disagree and collaborate. And so it was really handy the second time around to start with a somewhat similar nucleus, obviously allowing it to grow so that we didn't have a separate culture for the old SigOpt people or whatever it is. But it's way easier to grow a tree from a sapling than it is from a seed. And that's really allowed us to do that well, I think.
Clint Betts
What did you think of Paul Graham's essay, the Founder Mode one? Because a little bit before, you were saying, "Hey, my job is to get out of the way and kind of let that..." And then Paul, and Brian over at Airbnb, it seems like they're saying, "You need to be in the middle of everything." So how do you think about that?
Scott Clark
Yeah. I think Paul has a great way of dialing up the contrast in things, which is great because, obviously, we're still talking about it months later. I think, like all things, like all values, there are no truisms. You can do one thing or the other, and either one can be successful, but it's just about making sure that you do what you plan to do well instead of trying to cut to the middle and then failing at both. And so I'm a big proponent of a good CEO being someone who isn't required to do everything.
And this might be more controversial, but I remember when I was starting out in my career, there was this caricature of the CEO who's out on the golf course or something like that while everybody else is doing the hard work. But in my mind, that's actually the best CEO, because the team can execute without them. The team doesn't need them for every single decision. They can be on the golf course, which means they can obviously also not be on the golf course and dive in when they absolutely need to, but they're not needed day to day.
The CEOs who work 80, 100 hours a week and burn themselves out not only set bad cultural norms but also make themselves indispensable. And that's actually a bad thing in my mind. If the company can be successful without you, then you've set up a good company, you've set up a self-sustaining system. And that's what I'm always striving to do: make this more and more self-sustaining, which then allows me to push it maybe a little bit further or whatever it is, but I'm not needed for the day-to-day. That's my goal.
Clint Betts
Yeah, I agree with that. I think that's healthier and more sustainable for sure. What do you read? What reading recommendations would you have for us?
Scott Clark
Yeah, I am a huge sci-fi nerd and reader. Right now, I've started to try to dovetail a nonfiction and a fiction book simultaneously. So I'm reading The Ministry for the Future by Kim Stanley Robinson, which is maybe not far enough away. I usually read space operas or stuff about alien encounters, which is a little bit further away. I think one of the tag lines on the title page is, "This is the best science nonfiction I've ever read," because it's literally happening around us. It's about climate change, so it's a little bit disconcerting in that way. But the thing I love about sci-fi is that it presents these really interesting problems as a kind of thought experiment. And fundamentally, most sci-fi is optimistic at some level: we'll figure it out. It might be hard. It might look bad or a little bit dystopian, but the human spirit and ingenuity will figure it out. And that, to me, is very motivating. So I'm reading that right now, even if it's a little bit too close to home sometimes.
And then I'm also reading Guns, Germs, and Steel, which obviously came out a long time ago. I recently finished Sapiens, which feels like a continuation of that, and I realized that I hadn't read the original book, so I'm going back. And I always find it really interesting reading nonfiction books, just seeing how many things are... how many mistakes are repeated, how many patterns exist. I'm very much a proponent of the old saying, "If you don't understand history, you're doomed to repeat it."
But I find it very motivating to go back. And before that, the previous nonfiction I read was Ben Franklin's biography. Before that, it was Chickenhawk about a pilot in the Vietnam War and just how a lot of these management issues and organizational issues have existed over and over again. And some of them can be done really well, some really poorly, but a lot of it is stuff that we can continue to learn from. So, I need a little bit of mistakes from the past to avoid them and a little bit of optimism for the future to get up in the morning. And that's how I split my reading.
Clint Betts
What are your tools, apps, and things like that that you could not live without that you use on a daily basis?
Scott Clark
I need to do more of this, to be completely honest. So I [inaudible 00:28:14] I'm not a huge productivity app kind of guy. It's interesting. Martin, our board member, I think recently did a podcast or wrote an article about this: you have to be careful not to get caught up in the cult of optimization or productivity, because then it can be productivity for productivity's sake.
But I would say, I mean, I use email all the time. I use Slack. I have an executive assistant who I love and who makes my day way better. So I am a little bit old school in some of the ways that I do things. But one thing that I didn't use every day five years ago is obviously video conferencing. I use Miro quite a bit for whiteboarding.
I love to think about everything as a system or a diagram. But yeah, I think you need to find that balance. Everything's about tradeoffs. Everything's about finding this kind of Pareto frontier of tradeoffs, of like you need to be productive, but not in the cult of productivity. You need to push forward but not blindly and be constantly making that tradeoff.
Clint Betts
This was a weird year economically, obviously, and I mean it's really impressive that you raised in that environment, given the uncertainty. Obviously, the election was uncertain, but it was just a weird economy. Some industries are doing really well, and some are basically in a recession. It's just been a fascinating year, and now we are going into '25 with a little bit more certainty, right? How are you looking at 2025 and planning for that? How do you think about it from a macro level?
Scott Clark
Yeah, I think there are multiple macro levels that are going to end up being tailwinds here. One is, I mean, some of that uncertainty has collapsed. I mean, there's still quite a bit of uncertainty. But I'm hopeful that people will have enough certainty now to start making longer-term investments because they know at least directionally where things might be going. I think when it comes to AI specifically, again, a year or two ago, people were all about, "This is cool. I want to try it. I want to learn more. I wonder what it can do." And I think this next year is about how to extract value.
How do I actually put it to work? Shifting from prototypes to production is going to be a big push. And very intentionally, when we started the company, and again, some of this was because we got to see this with extremely elite firms over the last decade, we were trying to anticipate, "Well, what problem do you have as you shift from prototype into production?" When you're prototyping, you need a lot of tools. To get started, you need playgrounds, debuggers, and intuition. And there are a lot of companies that have raised money, and there are a lot of companies that do that extremely well.
But our goal always was, "Okay, what's the next step? What's that next bottleneck that everybody's going to run into simultaneously?" And I think that's this jump to production: crossing what we're coining as the AI confidence gap, having enough confidence to push something into production and get it through your internal compliance processes, or even just having enough internal confidence to stick your neck out in an organization, which can be hard, to say, "I'm going to sign up for the pager duty for this. I'm going to sign up for what happens if something goes wrong."
But then, once something's in production, fundamentally, one of the biggest problems with software isn't writing software. It's maintaining software. It's keeping systems up. Most software has already been written in a lot of these organizations and for many applications. That doesn't mean making improvements and doing new things isn't good, but maintenance is a huge organizational problem. And that's where you need better confidence, better testing, and a better understanding of these systems as you're not only building them from scratch but making iterative improvements, refactoring, changing out components, making sure it doesn't break underneath you, having it weather the storm of real usage.
And that's where we've been aiming. So one of the things that I'm super excited about is that this macro maturity, this push to productionalize applications, is running directly into the type of problem that we're trying to solve with respect to confidence. Because, in all honesty, confidence matters a little bit less when you're just playing around, because you're just playing around. There's no real risk. There's no real downside. There's no real permanence. But once it becomes real, there's risk, there's downside, there's maintenance, there's all of that, the things that require better confidence.
Clint Betts
Yeah. Once it matters. Once somebody relies on it, it starts to get pretty real.
Scott Clark
Yeah.
Clint Betts
Right.
Scott Clark
Once you have to hand it off to someone else, too. Most software is not maintained by the original person who wrote it. One way that we've been able to do that with traditional software is through having robust testing, because it's like, "The test will let me know if something's broken because I didn't write this.
And then when something does break, the test will let me know whether or not I fixed it." And we just don't have that for AI today. And that's not necessary if you're the only one playing with something in isolation, but that's not how enterprise software works. It grows up eventually. And I think 2025 is going to be the year of growing up in the enterprise for gen AI.
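To make the contrast concrete, this is the kind of deterministic regression test that traditional software leans on; the function below is a hypothetical example for illustration, not anything from Distributional or the interview.

```python
# The traditional-software safety net: a deterministic regression test.
# Run it after any change; if the behavior drifts, it fails.
# (parse_price is a hypothetical function, purely for illustration.)
def parse_price(text: str) -> float:
    return float(text.strip().lstrip("$").replace(",", ""))

def test_parse_price_regression():
    # Same input, same answer, every time -- exactly the guarantee that
    # a nondeterministic AI system cannot give you out of the box.
    assert parse_price("$1,234.50") == 1234.50
```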
Clint Betts
Yeah. Finally, we end every interview with the same question, and that is, at CEO.com, we believe the chances one gives are just as important as the chances one takes. When you hear that, who gave you a chance to get you to where you are today?
Scott Clark
Oh my God, so many people. I would say one of the things that I love the most about Silicon Valley is how many people realize that they got where they are because other people helped them. When we were going through YC, we learned a ton from the partners there.
Even before we got into YC, previous alumni helped us practice interviews and reviewed our application. And we just got a lot of really good advice. And sometimes, it's hard to get feedback where they're like, "Hey, you can't just submit your Ph.D. thesis. You have to have a product. You need to have a go-to-market plan and things like that." But it helped us grow up. I think every founder is naive in some way.
You need to be naive to be able to take that leap, but helping us hone in on what we don't know and where we need to get better is something that has been really, really important to us. So, we saw this with our YC partners. We saw this with Martin: we were his first board seat as he was joining Andreessen Horowitz, and we all kind of learned and grew together. But yeah, I'd say I am extremely lucky to be here. And a lot of that comes from the people who helped us along the way.
Clint Betts
Scott, thank you so much for coming on, and congratulations on everything you're doing. What you're doing, I think, is fundamental if we're going to continue to make strides in AI. So well done there, and I'm sure we'll have you back on. But thanks so much for coming on.
Scott Clark
Thank you so much for having me. I really appreciate it.
Edited for readability.