Modern Software with Mike Verinder

Autonomous SDLC: A Test Product Perspective with ChecksumAI Co-founder Gal Vered

Mike Verinder Season 1 Episode 2

Revolutionizing Testing with AI: Checksum’s Journey

In this episode of Modern Software, Mike Verinder sits down with Gal Vered, co-founder of Checksum, to explore how AI is reshaping the Autonomous Software Development Life Cycle (SDLC). Gal shares his journey, the mission behind Checksum, and how their AI-driven testing solutions are revolutionizing end-to-end automation.

From enterprise AI adoption challenges to the future of manual testing, they dive deep into the evolving role of AI in software quality assurance. The conversation also touches on open-source testing, privacy concerns, and the startup landscape for AI-driven companies.

Key Takeaways:

Checksum automates end-to-end testing using AI
AI-powered solutions are transforming software quality assurance
Enterprise AI adoption presents unique challenges
Manual testing remains crucial, even in an autonomous world
Checksum’s system of small models enhances precision and reliability
Privacy concerns are critical in AI-based testing
Open source testing plays a vital role in software development
Investors seek strong business foundations in AI startups
Israel’s tech ecosystem fosters groundbreaking AI innovation

Sound Bites from the Episode:

🎙️ "We generate end-to-end tests using AI."
🎙️ "We call it continuous quality."
🎙️ "AI will definitely change tech completely."

Join us as we unpack the future of Autonomous SDLC, the role of AI in testing, and what it means for developers, enterprises, and investors navigating this transformative shift.

🚀 Listen now on your favorite podcast platform!

Check out Checksum in the link below!

Checksum.ai - E2E test automation based on real user behavior

Mike Verinder:

Hey everyone, this is Mike Verinder with Modern Software. We are continuing our series on Autonomous SDLC. Today we are going to dive in a little bit from a test product perspective of Autonomous SDLC. So I'm here with Gal. Gal is one of the founders at Checksum AI, and he's going to share a little bit of his perspective on autonomous SDLC, where that may be going, and what his viewpoints are on that. Hey Gal, how's it going?

Gal Vered:

Good, how are you?

Mike Verinder:

It's good to see you, sir. Thanks for joining us today. Let's start off: why don't you just tell the audience a little bit about yourself, about Checksum, and what you guys do?

Gal Vered:

Yeah, so briefly about myself: I've worked in tech for the last 10 years. I worked at big companies like Google, as well as being CTO of small startups, until we came to co-found Checksum. Checksum, in the short version, generates end-to-end tests using AI. For those who are not familiar, end-to-end tests are tests, at least from the Checksum perspective, that literally open up a browser, go to our customer's application, and click on buttons to make sure everything is working.

Gal Vered:

End-to-end: the front-end communicates with the backend, the database, everything. So that's what Checksum does, in short. Our goal is to completely automate the process. When you use Checksum, we don't just provide some code that sometimes works and sometimes doesn't. We detect the different test cases that customers need, we generate the tests themselves, we provide the tests in open-source frameworks, specifically Playwright and Cypress, and we automatically maintain the tests. So, essentially, we're trying to provide a full service to our customers, so they don't need to think about end-to-end tests, except actually running them.
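
For readers who haven't seen one, a Playwright end-to-end test of the kind he's describing looks roughly like the sketch below; the URL, locators, and flow are illustrative placeholders, not actual Checksum output:

```ts
import { test, expect } from '@playwright/test';

// Hypothetical sign-in-and-add-to-cart flow. Everything here (URL,
// credentials, selectors) is a stand-in for a real application.
test('user can sign in and add an item to the cart', async ({ page }) => {
  await page.goto('https://app.example.com/login');
  await page.getByLabel('Email').fill('qa-user@example.com');
  await page.getByLabel('Password').fill('not-a-real-password');
  await page.getByRole('button', { name: 'Sign in' }).click();

  // A successful login exercises the front-end, backend, and database
  // together, which is the whole point of an end-to-end test.
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();

  await page.getByRole('link', { name: 'Catalog' }).click();
  await page.getByRole('button', { name: 'Add to cart' }).first().click();
  await expect(page.getByTestId('cart-count')).toHaveText('1');
});
```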

Mike Verinder:

Okay, cool. So, if I'm hearing you correctly, it sounds like you have a number of AI solutions on top of web-based UI automation. Is that correct?

Gal Vered:

Yeah, we have a system of AI models whose sole purpose is to generate and maintain a full end-to-end testing suite with very little input from the customer side, at least very little input required. Our customers can provide more input if they want to see specific tests. But within that, when we provide the tests, we provide them in Playwright. It's an open-source framework backed by Microsoft, so essentially it's like having an extension to your team, like having more engineers on your team who just sit down and write tests all day. When you run the tests, at the end of the day, it's Playwright, not Checksum.

Mike Verinder:

Yeah, Playwright's an amazing tool. It works really well. It's competed a lot lately against Selenium, which is another big part of my audience, but they're doing really well. They've got a great tool, and it's a great tool to partner with, to be quite frank.

Gal Vered:

Awesome. With Playwright's rise, we do find a lot of people want to migrate to Playwright, and using AI to do the migration process is one of the use cases we see from the market.

Mike Verinder:

Does Checksum help with that migration, or do you have to use a third-party vendor or something to do that?

Gal Vered:

No, we do the migration. At the end of the day, our model writes code, and this means that we can handle migrations and all types of weird cases, not just the very happy path of "we don't have anything, let's go ahead and just generate the test case."

Mike Verinder:

Yeah, one of the biggest issues with migrating from Selenium, and I know that's not the topic of our discussion today, has always been the framework, right? Because a Selenium framework is built however the guy wants to build it. It's not a standard framework. Maybe he's got tables driving his Selenium software, or this job or that job coded in whatever to clean up his automation, things like that. It's not the library as much as it's that framework aspect, because how does that guy clean up his tests now, right? How does he migrate away from that?

Gal Vered:

Playwright cleans a lot of these things up, and it was also written in the last few years, so it just fits better with modern development.

Mike Verinder:

Yeah, the hard part about that migration has always been that framework aspect. Well, cool, man.

Mike Verinder:

So I'm interested to know. You know, Checksum's a pretty neat product and a pretty neat tool set, and the AI space is so big, there are so many solutions and so much going on, and solutions that are great for a small startup sometimes aren't so hot for a big enterprise. Enterprises are complex. I know enterprises that still have COBOL and mainframes, and desktops and web applications and mobile applications and services, just all this hodgepodge of stuff. So it's interesting when you think of AI in an enterprise setting. Cursor is probably applicable, Replit is probably applicable in some situations. So it's interesting to me how an AI-focused product company really chooses what to focus on.

Mike Verinder:

Giving a solution out, right. Requirements to test cases, that's a great solution that you have. I try to think of a user's journey, right: they're creating requirements and they're making test cases. And even in a big enterprise, banks have to have test cases; insurance, highly regulated places, have to have test cases. Those are still a thing that you have to have. How do y'all figure out what that roadmap looks like and what to focus on next? Are you engaging with your clients to do that?

Gal Vered:

No, that's a great question. So first of all, when we started, we really started Checksum to solve our own problems. Checksum was started before ChatGPT was launched, so it's not like we saw AI and what it can do and reverse-engineered our way to some problem and solution.

Mike Verinder:

Yeah.

Gal Vered:

We always used transformers, but our first versions weren't even large language models. We just had seen this problem again and again, where your engineering team at a certain point becomes big, maybe it's Series B, maybe it's Series C, maybe it's even enterprise, and it becomes very hard to make changes in your application, in your code base, and be 100% sure that you didn't break something else. Because it's all dependencies on dependencies on dependencies, and you change one thing and don't realize you caused a ripple effect that, like, five features later will break something that's completely unrelated, at least in a direct way. And the way it manifests is that your engineers spend 50% of their time firefighting. You plan a sprint, and a day after, there's breakage in production or in staging, and everyone needs to drop what they're working on and just fix those bugs. The way to solve it is typically end-to-end tests, because when you have a straightforward, high-impact testing suite that gives you immediate feedback on what's working and what's not, you're able to detect all of the issues before you even make the PR, or right after you make the PR, solve them on the spot, finish the task, and move on. Instead of doing this dance of: you finish the task, you move on to something else, you realize you created a bug, you go back to the task, you move on to the next thing, you realize you created another bug, you go back again, this back and forth between QA and engineers, et cetera. You just have all of the feedback, you fix it, you move on.

Gal Vered:

So it came from a really, really real-world problem. But obviously the problem with end-to-end tests is that they take a very long time to write and maintain. It's not just writing them; the maintenance is like any software project. It's a project you need to maintain forever. And so we were very focused from day one on how to solve this very specific problem. With end-to-end tests, because they depend on your front-end and your backend and your database, they depend on a lot of stuff, not just your code base, we haven't seen, and I don't believe we'll see in the future, off-the-shelf solutions able to actually solve the end-to-end testing problem. Off-the-shelf tools are amazing, we use Cursor and Replit and they're great, but I think you need a dedicated end-to-end test solution.

Mike Verinder:

Yeah, so when you talk about that, are you focusing on test data generation through AI, and maybe... go ahead, I'm sorry.

Gal Vered:

It's not quite that. The main focus is to actually write the Playwright scripts, as well as the data setup and data cleanup as part of the tests. So to an extent we do need to set up the data. We work both with startups, which is more straightforward, and larger enterprises, which typically have their own test harness and the ability to spin up seed data in a testing environment, so we can hook in. The advantage of us using Playwright, and of our model generating code, is that we can hook into existing infrastructure and basically leverage it in test case detection, generation, and then maintenance.
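
Because the output is plain Playwright, data setup and cleanup can ride along as ordinary test hooks that call whatever seeding infrastructure the customer already has. A minimal sketch, assuming a hypothetical seeding endpoint in a UAT environment:

```ts
import { test, expect } from '@playwright/test';

// '/test-harness/seed' is a stand-in for whatever seeding endpoint or
// fixture system the customer's test environment already provides.
let orderId: string;

test.beforeEach(async ({ request }) => {
  const res = await request.post('https://uat.example.com/test-harness/seed', {
    data: { entity: 'order', status: 'pending' },
  });
  orderId = (await res.json()).id; // remember what was created so cleanup can remove it
});

test.afterEach(async ({ request }) => {
  await request.delete(`https://uat.example.com/test-harness/seed/${orderId}`);
});

test('pending order appears in the orders list', async ({ page }) => {
  await page.goto('https://uat.example.com/orders');
  await expect(page.getByText(`Order ${orderId}`)).toBeVisible();
});
```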

Mike Verinder:

So what's your opinion of an autonomous world? I have this beautiful vision. I can't tell you when it's magically going to happen, but someday we would have an autonomous SDLC. You might have a product manager that's sort of tweaking the product a little bit, and they may have a BA that sort of defines what they want and then validates that the product manager's vision is implemented, right? But that's about it. I have this hope that someday we could get to that state, and I see it happening. I mean, it's already happened in web to some degree.

Mike Verinder:

You could go to Wix.com today, stand up a beautiful website, integrate it with e-commerce and all that kind of stuff. One person can do that, right? Back in, what, 2000? It took a whole team to do that. So I understand enterprise is complex and our IT departments are complex. But what's your view of an autonomous SDLC? Do you think we'll ever get to that state, or do you think it's a mixed bag for the next 20 years? What do you think?

Gal Vered:

I think it's very similar to autonomous driving; maybe that's a good analogy. Getting to a point where you can run experiments with cars driving completely autonomously happened very fast, but actually getting a fully autonomous car is a very, very long tail of problem solving. We've been hearing about autonomous cars coming to market for the past 20 years, right? Every year they say next year we're going to have autonomous cars. So I think it's the same here: we're at the point right now where 80% of the road took 20% of the time, but completing it to fully autonomous will take a very, very long time. With that being said, there's still a lot of value to create. Even if you continue with this autonomous car analogy, Tesla works great, right? You still need your hands on the wheel, but it's able to navigate, it reduces stress; it's a very cool product. So in that case, great, now you have tools like Cursor or Replit where, okay, maybe it doesn't fully build the product, but it makes a software engineer 5x more efficient. And Checksum is maybe more like Waymo in this case. Waymo takes you from A to Z completely autonomously, you just click a button and you get the car, but it doesn't do everything, right? That's why Checksum is very focused on end-to-end tests, and we think about ourselves, and I think many companies will start doing this, as a service company: we're going to deliver to you an end-to-end testing suite. That's our goal.

Gal Vered:

Checksum is not so much a product you use daily, but more of an AI agent that just delivers code for you. And when we deliver code, we deliver it on GitHub or GitLab via pull request; we're very hooked into your existing tools, so you barely interface with a Checksum UI. Mainly, Checksum does stuff as if you had another developer on the team. So I think Checksum, in this sense, is very similar to Waymo, and I think software engineering, and business processes with AI generally, will take the same route. You'll see very wide tools that require people but still make you 3x more efficient, and very narrow tools that automate the entire thing but for a very narrow and focused use case, like Checksum with end-to-end tests. We only do web and we mainly focus on Playwright, and that's it, right? We're very focused.

Mike Verinder:

Yeah, what do you think we're missing from an AI perspective in the end-to-end space? Do we have CI/CD down? Do we have mobile down, where we could be pretty seamless with mobile? I mean, mobile was kind of an issue for a long time. What do you think our weak spots are?

Gal Vered:

Are you asking about testing, or generally?

Mike Verinder:

Testing, with those aspects added, right. So testing with CI/CD, testing with mobile, right?

Gal Vered:

Yeah, so again, Checksum solves the web component of it pretty automatically. We build our own models and train on data, so we have a setup we need to do per customer to make sure the accuracy is correct before we can unleash the model, but that's all done on Checksum's side. Web is pretty automatic today with Checksum. Our customers spend maybe an hour a week and get a full testing suite, and this hour is mainly reviewing the results of the tests we've generated and, when a test fails, figuring out why it failed and actually fixing the bug. So maybe an hour a week; that's pretty automatic. I don't think CI/CD is a gap; it's just a way of running the tests. I don't think mobile has caught up completely, but mobile is a bit harder because of the devices and the different operating systems.

Mike Verinder:

Yeah, do you see a future for manual testing in that environment?

Gal Vered:

Yeah, I actually see a future for manual testing, in a sense, more than automated testing, because I think what can be automated will be automated by AI. You'll always need QA automation engineers, but one QA automation engineer will be able to serve a bigger slice of the pie. And again, with Checksum, at a lot of companies we work with engineers directly, because if it's an hour a week, you might as well just have engineers do it. So it really depends. But real manual testing isn't regression, like running a checklist; it's actually understanding the different customers and the different edge cases. And there are apps with thousands of configurations, so you can't automate everything.

Gal Vered:

So it's understanding what the highest risk is for this release, and having an automated testing suite that gives you peace of mind and makes sure the core functionality is working, so you know 95% or 99% of your users will not break. But automating for the 1% of users that have those weird configs is going to be very hard, because, again, you're going to need hundreds of thousands of tests. And this is where manual actually is going to be important, or still is important, in the same way it is important today. Okay, we released this feature and that feature, and I know this customer has some weird issues with this feature and uses it in a weird way, so let me just do a manual regression pass for this new feature in that weird configuration, which we're not going to automate, because it's one customer out of a thousand and it only matters when we touch this feature.

Gal Vered:

So I think manual still has a place today, in a similar way. I don't think AI will replace it, because it requires a lot of context; it requires living and breathing your customers and your engineering team. But QA automation, it's easier to see how that's going to be automated, or Checksum is, to an extent, already automating it. If the work is just writing code, it's easier to see how machines can do it.

Mike Verinder:

I got you. So you mentioned Checksum really kind of started, or maybe you got the idea for Checksum, around when LLMs and ChatGPT came out. Do you use ChatGPT on the back end, or do you integrate with Anthropic, or is that user-defined? What does that look like?

Gal Vered:

Yeah. So we started Checksum before those things were launched, and we actually had models which we trained. To this day, Checksum is a hybrid between models we train and deploy ourselves, and tasks where off-the-shelf APIs work well. We always try to pick and choose the best model, because end-to-end tests need to be very fast, need to be very reliable, and they need to know the specific application, so they're very specific. This is why we need the hybrid. Whatever we can send to OpenAI and Anthropic and Google, we do, but it's not the bulk of the AI usage.

Mike Verinder:

I was also wondering what you think about the future. Do you think there's a future for open-source testing? Why I say that is because I see the tools just getting better and better. There's Playwright, and there are a number of test product companies out there built on Playwright. And Selenium is still great. But it seems like the products are getting really, really strong, and they're built on open source. Do you see a big future for open source? Do you see that still living at the same time?

Gal Vered:

Yeah, it's a good question. Obviously, open source will always be, to an extent, the infrastructure of everything we do in technology, right? Our servers will still run Linux, and it's going to continue to be maintained, and I'm sure the next Linux, the next infrastructure open-source project, maybe one that hasn't been invented yet, will continue to exist. People still use Docker to containerize applications, and again, the next thing will be invented and will be used, and Playwright is one example. But I think, especially with AI, the app layer is extremely important: productizing those models in a way that allows customers to actually complete full workflows and gain full advantage of those models, beyond just "hey, give me a piece of code here, a piece of code there" and saving a few minutes.

Gal Vered:

I think the app layer and the connections between everything are going to be super important, and again, to an extent, with Checksum, where we focus on end-to-end tests, we can see how you can drive results that way.

Mike Verinder:

Yeah, the reasons for open source used to be, well, test products are really expensive. That's not really the case anymore. It also used to be, our integrations are pretty in-depth and I can never find a test product that can facilitate them, but integrations these days are pretty commonplace. It's not like it was even six or seven years ago, and those were big reasons. And yes, open source started out with, we want it because it's free, but we're not talking about every tool being $25,000 a license these days, right? Tools are more competitively priced, and open source doesn't inherently give you things like AI solutions that have a huge return on investment. So when you compare that, plus the infrastructure and the support structure that you get, I don't know, man, I get a little nervous about the open-source future.

Gal Vered:

I understand your question better now. I think open source will continue to be the main channel, specifically for testing. Obviously, at Checksum, we took a bet on open source, right? We very deliberately provide code, provide Playwright tests, versus providing a platform with tests displayed in a no-code fashion that you run there. So, very deliberately, we went with open source, because from our experience talking to prospects at the beginning and working with our customers, we see a lot of importance placed on using open source specifically for testing.

Gal Vered:

A testing suite is only as strong as how connected the engineers on the team are to the results. If your tests sit within your code base and run within your GitHub Actions, you can see exactly the code that's being run, and you can quickly make changes in code with the tools you already know.

Gal Vered:

That's the most important thing. If we have some platform that none of the engineers ever go to, where they need permissions and don't even have permissions, don't know how to use it, and don't know how to review the reports, that's where we've lost them. So, specifically for testing, I think open source is extremely important, and again, we made a very deliberate decision. At the same time, I think the tools winning in the market today are the tools that allow you to write code and actually make things happen, and I don't see a shift towards no-code in software development generally because of AI. So I think the trend will be tools that allow you to leverage open-source frameworks and tools, like we've been doing today, in order to get what you want.
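
Concretely, tests that "sit within your code base" amount to a checked-in config and spec directory like any other code. A minimal repo-local Playwright setup might look like this sketch (paths and URL are illustrative):

```ts
// playwright.config.ts, committed alongside the application code so any
// engineer can read, run, and edit the suite with their usual tools.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './e2e',                  // generated specs live in the repo itself
  retries: process.env.CI ? 2 : 0,   // retry flaky runs only in CI
  use: {
    baseURL: process.env.BASE_URL ?? 'https://staging.example.com',
    trace: 'on-first-retry',         // keep traces around for failure review
  },
});
```

From there, running `npx playwright test` in CI, GitHub Actions or otherwise, is essentially the whole integration.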

Mike Verinder:

So let's go back to Checksum a little bit, and how you're integrating AI. How did you build that product and come up with it, and how do you deal with things like hallucinations and aspects of that?

Gal Vered:

Yeah, that's a great question. At Checksum, what we found most useful is, instead of creating one large model that tries to do everything, we think about Checksum as a system of small models. Each has a very specific task and solves a very specific problem. So we have a model whose goal is summarizing the HTML, basically providing an HTML summary, because HTML pages today are huge and you can't fit all of them into context. Even if you can, it creates issues when you feed in the entire HTML.

Gal Vered:

We have a model whose goal is to decide what next action to take. We have a model whose goal is to generate the best locator, or selector, and we have a model whose goal is to generate the best assertions. So basically, we have a series of very small models with very specific tasks. Some of them are internally trained, some of them use external APIs, but this allows us to keep hallucinations to a minimum. It allows us to generate tests in a cost-efficient manner, it allows us to make fast decisions that are more constrained, and when something fails, it allows us to break it down. So yeah: take a big problem, break it into small steps, and build a system of small models, of small decision-making capabilities, versus dumping everything into one LLM and hoping you get the correct answer.
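
As a rough illustration of that decomposition, the sketch below wires four narrow models into one test-generation pass. Every name and interface here is hypothetical, invented for illustration rather than taken from Checksum's internals:

```ts
// Hypothetical shapes for a "system of small models". Each model is a
// narrow function: summarize, choose an action, pick a locator, assert.
interface Action { kind: 'click' | 'fill'; target: string; value?: string }

interface SmallModels {
  summarizeHtml(fullHtml: string): Promise<string>;
  decideNextAction(summary: string, goal: string, soFar: string[]): Promise<Action | null>;
  generateLocator(summary: string, target: string): Promise<string>;
  generateAssertion(summary: string, goal: string): Promise<string>;
}

// Compose the narrow models into one test-generation pass. Constraining
// each call to a single small decision keeps hallucinations contained and
// makes any failure attributable to one stage of the pipeline.
async function generateTestSteps(models: SmallModels, html: string, goal: string): Promise<string[]> {
  const summary = await models.summarizeHtml(html); // huge DOM -> short summary
  const steps: string[] = [];

  for (let i = 0; i < 20; i++) { // hard cap so the sketch can't loop forever
    const action = await models.decideNextAction(summary, goal, steps);
    if (!action) break; // the model signals the goal has been reached
    const locator = await models.generateLocator(summary, action.target);
    steps.push(
      action.kind === 'fill'
        ? `await page.locator('${locator}').fill('${action.value ?? ''}');`
        : `await page.locator('${locator}').click();`,
    );
    // A real system would execute the step in a live browser and
    // re-summarize the new page state before deciding the next action.
  }

  steps.push(await models.generateAssertion(summary, goal));
  return steps;
}
```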

Mike Verinder:

So I've got another one for you, something I've seen a lot of: SaaS companies offering AI solutions and things that require LLM integration. How do SaaS companies deal with that in a regulated environment where privacy is a big issue?

Gal Vered:

That's a wonderful question. It's hard. It's privacy, and also, even if the data is not consumer data, companies today are really worried that large language models will be trained on their data, just because they feel like they're missing out on something. Whether this feeling is correct or not, and whether you should care if someone trains on your data, I don't know. But yeah, it's a big problem.

Gal Vered:

With Checksum, luckily, we've architected our entire system not to collect PII, so we don't collect any personally identifiable information, and the data that Checksum is exposed to is mainly what your front-end application looks like, which every customer of our customers already has access to; it's the web app you build for people to use. We don't need access to the specific code base. We don't need access to the backend systems or the database. We don't need access to PII or specific consumer data, so we architected it that way. But as we do this, we see that companies really care about it.

Gal Vered:

You know, we'll need to create some standards as to what you can and can't share, and companies need to make a decision about how much they're willing to give in order to gain AI efficiencies. Checksum was able to do it with very little data, but some use cases you just can't do if you're not willing to give data to an AI model. And I'm assuming, as AI models become better, and you can 5x your revenue if you just use AI, we'll see companies more willing to put data into AI. So it's more of a reward-structure question.

Mike Verinder:

So do you help them sanitize that data, or do you make them aware that the data needs to be sanitized? Like if it's a Social Security number or a phone number, or anything really: last names, addresses, and stuff like that.

Gal Vered:

So we don't work on real customer data, right? We generally test in a UAT environment. We log in as fake users. So we try to never touch customer data, and we have sanitization mechanisms in place we can activate in case an environment is more hybrid and a clean separation is hard. Typically we just do the clear separation, but when we can't, we have checks and balances in place that make sure there aren't any Social Security numbers or emails or any PII being sent.
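
A sanitization mechanism of the kind he mentions can start as simple pattern-based scrubbing applied before any page content leaves the environment. An illustrative sketch; the patterns are generic examples, not Checksum's actual checks:

```ts
// Illustrative PII scrubber: redact common identifier patterns before any
// page content is sent to an external model. Real systems go further
// (names, addresses, tokenization), but the shape is the same.
const PII_PATTERNS: Array<[RegExp, string]> = [
  [/\b\d{3}-\d{2}-\d{4}\b/g, '[REDACTED-SSN]'],                                  // US Social Security numbers
  [/\b[\w.+-]+@[\w-]+\.[\w.]+\b/g, '[REDACTED-EMAIL]'],                          // email addresses
  [/\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b/g, '[REDACTED-PHONE]'], // US phone numbers
  [/\b(?:\d[ -]?){13,16}\b/g, '[REDACTED-CARD]'],                                // card-number-shaped digit runs
];

export function scrubPii(text: string): string {
  return PII_PATTERNS.reduce((acc, [pattern, label]) => acc.replace(pattern, label), text);
}

// Example: scrubPii('Reach me at jane@acme.com or 555-867-5309')
//   -> 'Reach me at [REDACTED-EMAIL] or [REDACTED-PHONE]'
```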

Mike Verinder:

Where do you see Checksum in the next 24 months? Smaller clients? Enterprise clients?

Gal Vered:

If I told you I know where Checksum will be in 24 months, I'd probably be lying. We have a product that works really well, we move very fast, so we're hyper-focused on solving the right problem at hand. What we do know is where we want to be in 10 years, and where we want to be in the next month or the next three months. Everything in between, we kind of figure out as we go.

Gal Vered:

But generally speaking, as we move towards more autonomous systems, and specifically for engineering, more autonomous AI engineers, we believe that there is no autonomous AI engineer without an autonomous QA automation engineer. If AI is going to write more code, this code is going to be lower quality, and more code will be written. And if you want AI to write code fully autonomously, you need a way to test this code very robustly, and you need a way to test it across systems, because AI is very good at figuring out unit-level code; it's still not good at understanding a full code base all at once and exactly how it works.

Gal Vered:

Yeah, so our vision, I don't know if in the next two years, but in the next five or ten years, is to play a significant role in building autonomous AI engineering systems by providing the testing part and the feedback that will then be fed back to the AI model in an iteration loop. So I'm imagining an AI engineer model will write some code, Checksum will test it and provide all of the failures to the AI model, the AI model reflects, and so on. In a sense, that's what we're doing today, just behind the scenes: it's not an AI engineer, it's a person who's maybe using AI to write code, but the immediate feedback on everything that's working is the most powerful thing about Checksum and an end-to-end testing suite in general, and I think this feedback will be necessary to push the envelope. So that's the role we want to play in the overall grand scheme of things.
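
That iteration loop is easy to picture in code. A schematic sketch, where both injected functions are hypothetical placeholders for an AI code generator and a testing service respectively:

```ts
// Schematic of the loop Gal describes: generate code, run the end-to-end
// suite against it, feed failures back, repeat. All names here are
// hypothetical placeholders, not real Checksum or vendor APIs.
interface TestFailure { testName: string; error: string; trace: string }

interface AutonomousLoopDeps {
  writeCode(task: string, feedback: TestFailure[]): Promise<string>; // the "AI engineer"
  runE2eSuite(code: string): Promise<TestFailure[]>;                 // the testing service
}

async function buildUntilGreen(deps: AutonomousLoopDeps, task: string, maxIterations = 5): Promise<string> {
  let feedback: TestFailure[] = [];
  let code = '';

  for (let i = 0; i < maxIterations; i++) {
    code = await deps.writeCode(task, feedback); // the model reflects on prior failures
    feedback = await deps.runE2eSuite(code);     // immediate, cross-system feedback
    if (feedback.length === 0) return code;      // suite is green: ship it
  }
  throw new Error(`Still ${feedback.length} failing tests after ${maxIterations} iterations`);
}
```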

Mike Verinder:

You mentioned reporting a little bit, or giving feedback, which I consider reporting. I can tell you, as an engineering manager for 20 years, this is going to sound kind of weird, but one of the most important things to me was to have a dashboard or a viewpoint into how well my release was doing, the health of my release. When I got to, you know, two days before release, I needed to be able to manage a lot of things: other people's expectations, right, everybody in the business and our clients, just a lot going on. Does Checksum have the ability to pull all of that together on the execution side and give a "hey, we're ready to go" or "no go" kind of status?

Gal Vered:

Yes. Today we focus very much on the engineering standpoint, like, is your app functionally correct? But on our roadmap is definitely bringing on different stakeholders and different data points to be a central decision-making center for whether you can deploy a release, and testing, whether your app is working, is the main part of it. We can definitely do more there, because we're already somewhat connected to the new features, everything that changed in the release, right, because we're running the tests. So we can start publishing changes and informing customers, or informing product managers, or informing the company. We do have a lot of data about what changed in the release and what happened, and we can definitely operationalize it. But, you know, as a relatively young startup, we're very focused on end-to-end tests, and we also see a strong pull from the market. So we don't see any reason to diverge too much at this point, because we feel like there's still a lot we can do to satisfy the core use case of: I want to know that my app is working, and if it's not working, I want to know fast.

Mike Verinder:

Yeah, it's one of those cool things that happens when you become an end-to-end automation product company: you start collecting a lot of data, and you start realizing, man, look at all the insights I could give if I repackaged a lot of this data. Like, you could make improvements to your test processes if you tweak this or tweak that, right? You found five bugs last time, but you found them in this section of code, so this section of code is higher risk than other areas, or whatever. That's always been kind of a thing, right? As a manager, I would always go back and do that retrospective: how do we get better? As a product maker, I would love to have that kind of data and be able to push it back into an informed decision later.

Gal Vered:

100%. We call it internally continuous quality. So there's continuous integration, continuous deployment, and Checksum wants to play a part in continuous quality: basically, at every point, give you the insights and the tools and the information you need in order to increase the quality of your product. Again, end-to-end tests are just the start, where, like, are things working? But if there are bugs, we can then tell you in the future why bugs happen, which modules bugs happen in the most. Maybe you can refactor these modules, maybe you can do something about it so bugs don't happen in the first place. So yeah, I think there's a lot of opportunity here, and we try to, you know, look far but also focus on the near.

Mike Verinder:

So I've got one other area, I guess, that's really been on my mind a lot lately. It's not really... I guess it is product-specific in a way, but it goes back to funding. When you're a startup and you're going through the funding process and you're looking for investors, what are you seeing that they're looking at, or that they're interested in these days? Is it just anything you throw at them with AI in it, or is it specific aspects of that? What do you think the VCs and the private equity companies out there are really interested in these days? What really sort of turns them on?

Gal Vered:

Yeah, I think first of all, no, it's not everything you throw at them with AI. I think people tend not to give enough credit to investors; of course they're smart people, and they want to make sure there's actually a strong business behind it. But AI did change a lot of things, and I do see investors, and maybe people more generally, take it in different ways. There are investors who think that building software and building UI elements will be commoditized, so advantages you used to have, if you built a CRM, right, you could have an advantage and a great business.

Gal Vered:

I see a lot of investors say that now it's very easy to copy features, so application and UI and user experience are no longer an advantage. I don't know if I subscribe to it, but I do hear people say that, and they only focus on deep tech that cannot be copied quickly. Some even go further and think every company in the future will build its own CRM according to exactly its specifications, because it's going to be so easy. And other investors focus very much on speed, the same kind of insight but a different outcome: if you just move faster and you understand your customers best, you win, because copying software is easy. So they focus on the best team, the team that moves fastest and has the most insights. So yeah, I don't think it's as uniform today because of the latest changes, and it will probably converge at a certain point to some thesis that controls the market.

Mike Verinder:

Do you think liquidity is opening up again in the market, or is that still pretty tight?

Gal Vered:

Yeah, I don't think I'm the right person to ask. You know, we're doing well, so I have a sample size of one; I don't know if I'm the right person to answer this question. You'd probably need the investor side to tell you what they're seeing, whether they're seeing higher volume.

Mike Verinder:

I mean, I talk to a lot of boards. I've talked to probably about six different test product boards, and I've seen it a little bit all over the place, to be honest. The money's there, it's just a little tighter. One thing I have seen is how much they ask. They ask a lot of questions, a lot of times because they aren't the experts in the tech. So, interesting stuff, Gal. I was just wondering what your insight was.

Gal Vered:

Yeah, I agree, it's an interesting period. I think tech is changing very, very fast, and AI will definitely change tech completely; tech will be the first thing that changes completely because of AI, and I think other industries will have to follow.

Mike Verinder:

Can we expect to see Checksum at any conferences, any test conferences or anything like that this year?

Gal Vered:

Yeah, definitely. We move very fast, so I can't tell you, here's our calendar for 2025 and all of the things we're going to attend, but on a month-to-month basis it's definitely on our roadmap. We want to engage more with the community at scale, and conferences are a great way to do it.

Mike Verinder:

Yeah, is that your approach? More meetups and local events, or more conferences, or a bit of a mix of both?

Gal Vered:

It's a mix. We also see, now, that we've made a good enough name for ourselves. I don't know if 90% of people know us, but we've made a good enough name for ourselves that we get a good stream of inbound. People hear about us word of mouth, or kind of become familiar with Checksum, and they find us. But obviously, again, we want to engage with the community on a broader scale, so conferences are a good way to go about it.

Mike Verinder:

To get your name out a little bit, right? I understand that. So, you know, I know who you are, and you're an innovative test product company. You're also from Israel. I think that's awesome. I'm so impressed with Israeli companies. One of my first jobs was doing development and testing work with a company called Amdocs, which is an Israeli telecom billing company, and it's just so amazing to see such awesome stuff come out of such a small country. The innovation that's able to come out of Israel is just unbelievable.

Gal Vered:

100% agree, and I think there are a few reasons for it.

Mike Verinder:

What are your reasons for it? Why is it like a little San Francisco over there, or something?

Gal Vered:

So we have a team in California, our engineering team is in Israel, and overall we're a US company with a big Israeli presence. Obviously I sit in California, but I'm from Israel. Why? I think Israel is a relatively new country. It was established on similar values to the US in the sense of pioneering, right, and that was very recent.

Gal Vered:

So I think the entrepreneurial culture is very prominent in Israel because, like 70 years ago when Israel was founded, people just did stuff to make things happen, and that's very aligned with startups. And also the culture: when I was at Google, I left to co-found a startup. When you do that in other cultures, your friends are like, why, right? Why would you leave a perfectly good job making good money, cut your salary by more than half, and start a business that's unstable? And in Israeli culture, at least with my friends, it's like, yeah, great, that's very ambitious, that's amazing. So I think those things get people to try more.

Mike Verinder:

Yeah, well, like I said, it's amazing to see. You can give endless examples of software companies that have done really well. So congratulations on all you've done so far. I'll be refreshing your website to see what you come out with and what you continue to come up with every day. So thanks, Gal.

Gal Vered:

Thanks, and thanks for your time. It was a lovely conversation.

Mike Verinder:

Hey, this is Mike. Thanks for watching part two of our series, Autonomous SDLC: A Test Product Perspective. If you want to go back and check part one, which was Autonomous SDLC: A Developer's Perspective, feel free to look that up on our channel. I do expect to do a part three sometime within the next month, so you're welcome to check that out as well. Special thanks to Gal and the Checksum.ai team for taking the time to talk with us. The link to Checksum is in the description below. It's a great product, and it's a great team over there with a lot of energy. It's a good company. All right, thanks, guys.