D2 Portfolio Demo - Augment Code
SUMMARY
In this demo, we explore Augment Code’s innovative use of AI to enhance the development process within popular IDEs like VS Code and JetBrains. By enabling seamless pair programming, Augment AI provides developers with a deeper understanding of their codebase, highlighting details and snippets that other AI tools often miss. Its ability to leverage contextual documentation and historical changes gives developers a significant edge in navigating and optimizing their code.
Augment’s advanced code completion capabilities offer a smarter, more intuitive coding experience by predicting and suggesting relevant code based on the entire project context. This session demonstrates how Augment is pushing the boundaries of AI-assisted development, making it easier for developers to write cleaner, more efficient code.
TRANSCRIPT
Jerry Li 03:20
All right, I think we can get started. So my name is Jerry, I'm the founder of ELC, and we also just started this D2 program. I've literally talked to all of you one on one, and met some of this group before, so I'm super excited to kick this off. This is our first event of the year, and it's part of our D2 program, together with the Augment team. If you don't know D2, it's a new program that ELC started about six months ago, intended to bring together startups in the devtool space and engineering leaders who are very passionate about developer tools: they want to discover new tools, contribute, and be part of that, whether by providing insights, taking part in discussions, maybe investing, or becoming a customer or advisor. So a lot can happen. We have a lot of momentum so far, and we are about to launch a community just for D2. This is the first event of the year, the first in a series to introduce new devtool startups. We have 45 minutes today, and here's how we'll spend the time: we'll have a brief section from everyone, just who you are, what you do, and which company you work for, so that we have some shared context of who's in the room. Then we'll hand over to Roger to introduce the Augment team, do maybe 25 to 30 minutes of demo, and leave 10 to 15 minutes at the end for Q&A. After the 45 minutes, for the D2 members who still want to stay, we can extend the meeting for internal discussion, but that's optional. So let's kick it off with a quick intro. Gabe, do you want to get started?
Gabe Westmaas 05:26
Sure, I'm happy to. Hey everyone, I'm Gabe Westmaas. I lead engineering, product, and design at a company called Anrok, and prior to that I was at Checkr for about six years. Sorry, was there more to cover on the intro or is that good? That's good.
Jerry Li 05:42
Perfect. Exactly what I hoped. I'll just pick names then. Natalia.
Natalia Pyalling 05:50
Hi, my name is Natalia and I'm leading DevOps engineering inside Intermedia Cloud Communications. I have expertise in different areas including SRE, AI, data engineering, and development itself, so about 20-plus years in the area. It's quite interesting to look at the new adventures, how it goes.
Jerry Li 06:19
Great. Dan.
Dan 06:25
Hey everyone. I'm Dan, on a career break, exploring a few things. Previously director of engineering at LinkedIn for about eight years.
Jerry Li 06:36
Great, thanks. Kristian.
Kristian 06:44
Yeah, I'm sorry, I'm commuting. I'm on the ferry on the East River in New York right now. But I'm Kristian, director of engineering at Spotify. Great to meet you.
Jerry Li 06:54
Awesome. Roman.
Roman Kleiner 06:59
Hello. Roman Kleiner, VP of Engineering at a company called Ignite. Content management provider, responsible for everything to do with tooling, DevOps, SRE, core components, et cetera, et cetera. Great, thanks.
Jerry Li 07:14
Sandhya.
Sandhya Jaideep 07:16
Hi, my name is Sandhya Jaideep. I used to be a VP of Engineering at a company called Tradeshift. Currently I'm building my own startup in the event management space.
Jerry Li 07:28
Kent.
Kent 07:30
Yeah, I'm Kent. I'm from Yelp. Spent the last 10 years in developer productivity building out that group. I'm happy to be here.
Jerry Li 07:40
And next we go, Konstantin.
Jerry Li 07:48
Konstantin. I might mispronounce your name. That's good.
Konstantin Novoselov 07:53
Konstantin. I was a software architect at York for nine years or so. Before that, Amazon, startups. That's my experience.
Jerry Li 08:02
Great to see you all. Great, thanks. Tara, you can go next.
Tara Hernandez 08:09
Hi, I'm Tara Hernandez. I am currently Vice President of Developer Productivity at MongoDB.
Jerry Li 08:15
Awesome. Raj.
Raj Nagarajan 08:20
Hey. I've been doing developer productivity for almost the last 15 years at eBay and Amazon. Right now I'm on a career break, exploring my next adventure.
Jerry Li 08:35
Dhara.
Dhara Patel 08:40
Hello everyone. I'm Dhara and I'm currently the SVP of Engineering at Kinesso. I lead a few product engineering teams as well as platform innovation and test engineering.
Jerry Li 08:54
Thanks. Corey, who just joined: we're doing introductions now. Just who you are, what you do, and that's it.
Corey Coto 09:03
Hi, everybody. Corey Coto, based in Seattle. I'm the CEO of Kaizen Insights.
Jerry Li 09:08
Awesome. Next, Mark, do you want to introduce yourself a little bit? You can stay off camera; I know you've got a cold.
Mark Dhillon 09:18
So sorry, I'm dying right now of a cold, so I'll just skip myself. But, yeah, everyone here is much smarter than me. That's all I'll say.
Jerry Li 09:28
All right, Roger, back to you.
Roger Luo 09:31
Okay, yeah. Thanks, everyone. For the members meeting me for the first time, this is Roger. I work together with Jerry and Marco on helping build the D2 community. It's our great pleasure to have Augment here, John and Matt, to tell us more about Augment's platform. Jerry and I got the opportunity to meet the Augment team and Scott last August when we were hosting the previous ELC Annual, and we were immediately impressed by the caliber of the team. This is the best enterprise AI team we have ever met, and they are solving very important problems that all of us deeply care about. Although AI coding tools are everywhere, they really fall short when scaling to a large engineering team. Augment Code built the first purpose-built AI developer platform for teams, and they developed really the best technology for having the full context of the code base. As D2, we got the opportunity to participate in their Series B last October, when we were still a very small community, and I'm so excited to see the community grow now. I'll give the mic to John and Matt to introduce themselves and tell you more about Augment.
John Engler 11:14
Wonderful, Roger. You did a brilliant job of teeing me up. My name is John Engler. I'm part of the sales leadership team here at Augment, and I've been part of the Augment team since day one. In my former role I was an operating partner at Sutter Hill Ventures and had the opportunity to meet the founding team on day one. I took keen interest in the very promising pursuit of using AI to generate code and in the quality of the team being built around the idea, and eventually migrated out of Sutter Hill Ventures and into Augment full time just last October. So it's been a wild ride. I'll let Matt introduce himself, and then we'll give you a Reader's Digest version of where we're at and the problems we're looking to solve for our customers. Thanks, Matt.

Matt Ball
Thanks, John. So yeah, I'm Matt Ball. I'm a solutions architect on the team here. I've been at Augment a little over a year now; I joined John's team to help the go-to-market efforts. Prior to Augment, I was at Postman for about five years, where I also joined as the first SE. So I've been in the developer tool space, going through some similar journeys, for a little while now. But excited to share more with you today on Augment and what it looks like.

John Engler
Yeah, and my apologies, I don't have a slide for this; there's a little more context setting than maybe I fully realized here. But we are two years old. We're based in Palo Alto. We have nearly 80 employees at this point. D2 participated in our Series B, which took place last fall; we raised $252 million as part of that last round. In addition to your participation, Sutter Hill Ventures, Index Ventures, Lightspeed Venture Partners, and Innovation Endeavors participated in that round. We're supporting customers across hundreds of domains, and we're just now wrapping up our second quarter of taking sales; we only started genuine selling activities August 1st of last year. It's been an experience of learning, as I'm sure many of you can appreciate, but we've also been very positively surprised by the desire companies have to solve the problems Augment is meant to solve. So I'll walk through some of the messaging we share externally, and then the prize of this presentation, obviously, is Matt's demo, putting into practical terms the outcomes Augment provides for organizations that look very much like yours. I want to make sure we have plenty of time at the end, so my goal is to wrap the unidirectional talk track at the bottom of the hour, then answer questions; and if anybody has interest in anything beyond that, I'll share some contact information in the chat about how to get a hold of us. So: there's a lot of noise around the AI-for-code space, and a lot of competitors. Many of them took what we perceive to be shortcuts, front-ending the publicly available large language models, and produced products that were quick to market but low in value and low in consequence. Augment is unique in our ability to deeply understand existing, long-lived code bases. Most engineers in your organizations, as you well know, don't spend a lot of time on greenfield projects; they spend most of their time iterating within existing code bases, extending them, writing tests, and doing very toilsome tasks and research.
And we want to be the AI that they turn to, to ask the questions they might be embarrassed to ask their colleagues, or cannot ask their colleagues because they're remote or otherwise unavailable, alleviating the need to go to documentation or just puzzle out the code themselves. We think we're uniquely positioned to pursue that opportunity because tools in the existing ecosystem have not been able to solve this problem. AIs have been very superficial in their understanding of very complex, large enterprise code bases, and the knowledge about those code bases can certainly be shared more prevalently across the entirety of an organization. That context that we're uniquely able to capture, through understanding your code, your documentation, and your historical changes, we surface to developers uniformly across many modalities. So the same context that we provide in chat drives and informs line completion, and drives and informs agentic user flows. By having that common context engine, we're able to more deterministically produce higher quality code: code that follows the existing idiosyncrasies of your code base, reducing the need to recursively edit, adjust, or override the generic code that AIs are traditionally associated with delivering. We think this solves all sorts of use cases, whether answering unknown unknowns or more meaningfully accelerating task development, and Matt will get into the details of that. Speeding through this quality statement: this is not just a very biased person's opinion. It is quantitatively and qualitatively validated through a series of evaluations. Regularly, when compared to the dominant participant in our space, GitHub Copilot, we're able to achieve twice the level of quality that we observe through evals against Copilot. And this manifests in our evals and in customers' hands when they put Augment into their own environments, through comments like this one from Netflix, where we repeatedly hear about the delight of seeing your own code suggested back to you by an AI. Our defensible capabilities here are delivered through this incredible team that Roger outlined earlier. We've got 22 AI researchers on our team. These are people we've recruited out of Meta and Google; they are very, very difficult people to recruit, and there are only so many places you can recruit them from. Sutter Hill Ventures was critical in helping us build this team, and we've got the funding to continue to persist and widen the gap technically relative to the other folks in the space. The last little bit: security is ever present. This has been a concern as AIs have infiltrated organizations like the ones you work at currently or previously worked at. So we made a pledge from the get-go to build a tool that would be acceptable to enterprise engineers, and as part of that, a confirmation that we would never, ever train on customer data, and that we'd build a system that gave confidence to our users. We've validated that through the audits and whatnot that come with SOC 2 and GDPR requirements and so forth. So we've got a really strong security story that aligns with the most rigorous security requirements that organizations put on us.
So hopefully with that context in mind, I will stop the laborious slideshow and we'll get into the more fun demo part of the discussion. Thanks, John.

Matt Ball
So I'm going to be showing most of the demo in VS Code here, but really this allows me to point out that Augment is designed to drop into the IDEs your engineers are already using. We support VS Code, JetBrains, Vim. We fundamentally believe it's a lot easier to meet developers where they are rather than to create completely new experiences. John mentioned this idea of code base understanding being the core part of Augment. Before any request gets to any one of our LLMs, those generative models, it first has to pass through our context engine, which looks at the broader code base and determines the most relevant information to show to any one of those models. This is a key architectural choice that we've made, and what it means is that every interaction I have with Augment is always with the broader code base in mind. This little chip down here is just representative of the code base I happen to have open in my IDE, and Augment is reflecting that back to me. This is a little bit meta because I have Augment's own code base open; that's why it says Augment. So I'll drop in here and just ask a couple of questions so we can start to get to grips with this. I'll say: I'm new to this code base, explain what it does. We've indexed the entire code base, in this case a pretty large monorepo, up front, so the responses are particularly prompt, and as mentioned, that entire code base is being considered. We really like to think of Augment as almost a pair programmer you're interacting with. We believe every interaction should be with the entire code base in mind; you shouldn't have a choice to sometimes query the workspace and sometimes not. If you're working with someone and you're pair programming, you would hope that they would use their full knowledge of the code base rather than pick and choose parts of what they know. Augment scales with the kind of question you ask. I've asked for a general overview, so I get some high level bullets around specific components and things that exist in this code base. Of course I can decide to follow up, right? Let's say I just ask for more information on this. I'm having a threaded conversation, like we've all become used to with the different AI chats out there. The difference is that increasingly I start to see more low level detail. Augment is clearly pulling things from deeper within the code base; it's showing me snippets from the code base that it's identified as highly relevant. These are the things we see other AI-for-code tools really have trouble surfacing; often their responses are quite general and not specific. One example of this: if I ask a question like "what groups exist on our API", Augment is able to go away, discover the different APIs that are in the code base, and tell me all the different endpoints in each of those APIs, which is a really nice discovery capability. Information discovery through Augment's chat is a big thing: folks new to the organization, parts of the code base you're less familiar with. But I can quickly switch out of that mode and say something like, well, help me add a new endpoint to #1.
So I'm just back-referencing that first API Augment mentioned. And again, I'm getting really specific, granular advice about how to do this. I'm not getting generic advice about how to add an endpoint to an API; I'm getting how to do this within this code base. So I need to define the endpoint in this proto file, there's a Rust implementation I need to create over here, and it's even detailing some optional stuff around feature flagging that exists in this code base. You can see this is really tailored to the conventions and the way things are done in this code base. Of course, we discussed that day-to-day tasks for most engineers look like some kind of iterative task in the code base. It's not necessarily something as simple as adding a new API endpoint, and it's not something super greenfield. So what I've got here is a task in our ticketing system asking me to make some kind of modification to the code base. It looks like we need to add session ID to fully exported edit events. That sounds like a class or a data structure to me. There's some detail in here, but pretty limited, about what's happening, but it seems clear that this session ID probably exists in some other places, so this is about getting the rest of the code base up to speed in some way. I'm going to take a pretty simple question to Augment and just say: where is the data structure for this? Augment goes away and immediately locates the right file. Sometimes natural language descriptions of things are not things you can use typical search mechanisms for, right? Those are keyword searches. So Augment helps me find this file and this class over here called EditDatum; it could have been difficult for me to locate this through a traditional search. I'll close the chat at this point. We could use it to help us through the task, but instead I'm going to show you how Augment can actually predict a lot of the task for us. I'm going to drop in here and add some new lines, and you'll see that Augment is actually predicting this session ID to be added to this data structure, which at first may seem a little unbelievable. But remember, the task said that this field exists on other, similar data structures in the code base. So Augment is realizing the same thing: you have this field in a lot of other similar data structures, so it predicts that's the most likely thing you would want to include here. I'll accept that suggestion; that was an example of one of our code completions. Then I'm going to navigate to the unit tests for this file, edittest.py, and in here you'll see that on line 102 there's a highlight and some kind of suggestion coming from Augment, so I can press a keyboard shortcut to view it. What's being presented to me here is a diff, and you can see that it wants to add a change related to session ID. What's happening here is a combination of Augment's code base understanding and context from my recent changes, my intent, my directionality. We pair those things into an agentic experience called Next Edit, where Augment predicts where that next code change should be, guides me to that location, shows me the code change, and then I can decide whether to accept or reject it.
So I'll choose to accept this change. Once I accept one change, you can see Augment at the bottom of the file wanting to propose the next thing. I press the same keyboard shortcut, navigate to the next diff, review whether I think it's appropriate, and accept it if I like it. Augment continues through this flow, and once it's happy with the changes in this file, it might suggest moving to the next file. So I'll press that keyboard shortcut again, and it launches into a new file. We're really moving around the code base to complete all of the updates that I need to make for this task. And it's not just Python compiler magic in this case: I'm updating some SQL here that exists within a Python file. So the effect we're having here is truly generative.
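To make the walkthrough above concrete, here is a hedged reconstruction of the edit propagation Matt describes. The names (EditDatum, session ID, the test file, the embedded SQL) come from the transcript, but the actual Augment monorepo code is not shown in the recording, so the shapes below are illustrative assumptions.

```python
# Illustrative reconstruction of the demo's edit propagation; field names
# and shapes are assumptions based on what Matt described, not real code.
from dataclasses import dataclass

@dataclass
class EditDatum:
    """One exported edit event (the data structure the ticket targets)."""
    user_id: str
    timestamp: float
    content: str
    session_id: str  # the field Augment's completion predicted, per the ticket

# In the unit tests (edittest.py in the demo), Next Edit proposed the
# matching change as a diff to accept or reject:
def test_edit_datum_includes_session_id():
    datum = EditDatum(user_id="u1", timestamp=0.0, content="x", session_id="s1")
    assert datum.session_id == "s1"

# Next Edit then hopped to SQL embedded in a Python string. Nothing
# type-checks a string literal, which is Matt's "not just Python compiler
# magic" point: the model has to infer the change generatively.
EXPORT_QUERY = """
    SELECT user_id, timestamp, content, session_id  -- column added by Next Edit
    FROM exported_edit_events
"""
```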
Matt Ball 29:18
So yeah, we think this is a really exciting way to combine code base understanding with picking up on the intent of engineers as they work through tasks. As you can imagine, a lot of the challenge in working through a task like this is just figuring out which file to go and edit next, and what the knock-on effects are. Having something that is automatically happy to go in and update my unit tests for me solves a lot of toil. So I'll pause here, and I think that hopefully leaves us some really good time for questions.
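Before the Q&A, it is worth pinning down the architecture Matt sketched at the start of the demo: every request passes through a context engine that selects relevant code before any generative model sees the prompt. Below is a minimal retrieve-then-generate sketch of that idea; all names and the toy scoring function are hypothetical, since Augment has not published this interface.

```python
# Minimal retrieve-then-generate sketch; every name here is hypothetical.
from dataclasses import dataclass

def score(request: str, text: str) -> float:
    # Toy relevance: token overlap. The real retriever is reportedly far
    # richer and understands code structure, not just flat text.
    a, b = set(request.lower().split()), set(text.lower().split())
    return len(a & b) / (len(a) or 1)

@dataclass
class Snippet:
    path: str
    text: str
    score: float  # relevance to the current request

class ContextEngine:
    """Stand-in for Augment's proprietary retriever over an indexed repo."""
    def __init__(self, index: list[tuple[str, str]]):
        self.index = index  # pre-built (path, text) index of the whole code base

    def retrieve(self, request: str, k: int = 8) -> list[Snippet]:
        scored = [Snippet(p, t, score(request, t)) for p, t in self.index]
        return sorted(scored, key=lambda s: s.score, reverse=True)[:k]

def answer(engine: ContextEngine, request: str, llm) -> str:
    # The key architectural choice: context selection happens before the
    # generative model ever sees the request, for chat, completions, and
    # Next Edit alike.
    context = engine.retrieve(request)
    prompt = "\n\n".join(s.text for s in context) + "\n\n" + request
    return llm(prompt)
```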
Tara Hernandez 30:03
I've got one.
Tara Hernandez 30:05
Yeah, one of the things that I've seen, either directly or in blogs and whatnot, is that acceptance rates for code generation are historically pretty damn poor. In our evaluations, best case it's less than 20 percent; worst case it's single digits, if that. Is that acceptance rate something you're already tailoring for? Well, actually, first of all, is it something you're currently tracking, and is it a high investment as part of this initial version?
John Engler 30:44
Yeah, so completion acceptance rate is absolutely something that we track. We actually have a full analytics suite that accompanies our product, which has been a missing component for a lot of these tools as you try to quantify their value. But completion acceptance rate is a bit of a debatable metric in terms of its value. Ours is many multiples, in general, of what you outlined from your experience. But you also have to consider the denominator: how many completions are being suggested. Augment, given the infrastructure and the performance of our model, produces a lot of completions. So even though we produce more completions than most, we have a higher completion acceptance rate than most. But we'd argue that's maybe not the most valuable metric to be attentive to. These tools are delegated to individual developers; they have agency over whether they want to continue using the tool or not. So unlike other technologies you may be purchasing for your organization, developer retention, and how the tool is adopted across the organization, is a really good proxy for how much value these tools deliver. We really pride ourselves on the fact that, in aggregate across all of our customers, we achieve higher than a 70 to 75 percent retention rate. That stands in stark contrast with many GitHub Copilot customers, which are typically in the 20 to 30 percent range. And in the absence of some other tool to measure dev productivity, one that has yet to be discovered, or at least socialized with me, that seems to serve as a really strong proxy of utility, quality, and value for these organizations.
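A quick arithmetic illustration of the denominator point John makes, with invented numbers: two tools can accept a similar share of completions while one delivers far more accepted code simply because it surfaces more suggestions.

```python
# Invented numbers illustrating why raw acceptance rate can mislead.
tool_a = {"suggested": 1_000, "accepted": 180}  # conservative suggester
tool_b = {"suggested": 4_000, "accepted": 600}  # prolific suggester

for name, t in (("A", tool_a), ("B", tool_b)):
    rate = t["accepted"] / t["suggested"]
    print(f"tool {name}: acceptance rate {rate:.0%}, accepted {t['accepted']}")

# tool A: acceptance rate 18%, accepted 180
# tool B: acceptance rate 15%, accepted 600
# B looks worse by rate yet delivered more than 3x the accepted code,
# which is why John points to retention as the better proxy.
```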
Tara Hernandez 33:02
Thanks for the information. Full disclosure, I'm still a GenAI skeptic in a lot of ways, so acceptance rates being low is not necessarily a bad thing to me, because it means that my developers are actually writing the code and therefore being accountable for what is written, which is one of the challenges I'm trying to reconcile. One of the things that I find interesting, and I don't know if it's a true differentiator, but it's certainly more unusual from what I've seen recently, is that concept of education being a first-class citizen in the developer workflow. That is, I think, a really interesting aspect of the approach here. So thank you for that.
John Engler 33:41
Yeah, I mean, Tara, I've been having conversations with folks like you for multiple years about GenAI and developer workflows, and it's been interesting to watch the evolution of the reaction. Initially it was: this is going to take devs' jobs. And it's like, no, this isn't going to take dev jobs, but a developer with an AI may take the job of a developer without one, right? That's the prevailing thought, and probably how we're orienting ourselves: there will always be a human in the middle. This will always be supervised, and when you commit code, there is a human responsible for that code.
Konstantin Novoselov 34:23
But still that.
Tara Hernandez 34:25
I'm sorry, I was just going to say: I was recently at an event where somebody was like, oh, you can run your organization with two seniors in six spots. And I'm like, yeah, I'm not giving you any of my money. Go for it, Konstantin.
Konstantin Novoselov 34:36
Yeah, but still, a very low acceptance rate indicates some inefficiency. If a developer needs to go through a bunch of recommendations just to reject them, it's definitely not the best use of the developer's time. So how do you address this kind of point?
John Engler 34:55
Yeah, I mean, I think you have to look at it in the aggregate, Konstantin. There's the realization that quickly typing through or rejecting three or four completions just to get the perfect completion as the fourth, if that saves a couple minutes of research or a couple minutes of noodling on how I'm going to write this function, I think all of us would argue that's a good trade-off. We'll take that trade-off. But at the end of the day, as I mentioned, what we've discovered experientially is that if the tool doesn't provide utility, if it's more work or more hassle than the value it provides, that user ceases to be a user tomorrow morning when they get back to their desk. It's not the perfect measure, but user retention and user adoption, for that reason, are something we pay a lot of attention to.
Konstantin Novoselov 35:57
So essentially it seems this is an important metric, but not necessarily a metric that you need to optimize for.
John Engler 36:03
I believe that's correct. We would love every suggestion to be the perfect suggestion, but the nature of LLMs as mathematical probability models suggests that may not be a realistic expectation, even five years from now. I don't know, maybe there are technologies where we could cycle things through and have the AI chew on things a couple of different times so the poor completions are screened out in some way. But those are questions for the AI researchers; that's not a sales guy's place to speculate.
Natalia Pyalling 36:42
I have some questions related to those kinds of metrics as well. You said the quality of your system's answers is twice as good as Copilot's. On which metrics did you base this result? How do you measure that?
John Engler 37:04
Yes. Matt, do you want to cover CCEval?

Matt Ball
Sure, yeah. So there's an open source benchmark called CCEval that is designed to measure code completion capability. It's specifically targeted at code completions, the suggestions you're getting at the cursor. The way the benchmark works is there's a sample of several thousand code sites, locations within code bases where code has been deleted, and the benchmark calls for the AI to make a generation in that spot through its completion engine. That's then compared with the code that was once there, approved and committed by a human. So it basically measures the exact match of that.
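As described, the benchmark deletes a span of human-committed code, asks the completion engine to regenerate it at that spot, and scores by exact match. A minimal sketch of that loop follows; the sample format and the complete callable are invented stand-ins for illustration, not the benchmark's real harness.

```python
# Minimal sketch of an exact-match completion eval as described above.
# `samples` and `complete` are invented stand-ins, not the real harness.

def exact_match_eval(samples, complete) -> float:
    """samples: iterable of (prefix, ground_truth, suffix) taken from real
    repositories, where ground_truth is the deleted, human-committed span."""
    hits = total = 0
    for prefix, ground_truth, suffix in samples:
        # Ask the completion engine to fill the deleted span at the cursor.
        generated = complete(prefix=prefix, suffix=suffix)
        hits += int(generated.strip() == ground_truth.strip())
        total += 1
    return hits / total if total else 0.0

# Usage (names hypothetical): exact_match_eval(load_samples(), augment_complete)
```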
Natalia Pyalling 38:02
Okay, that's interesting. I was at a Copilot conference some time ago, and one of the companies, I believe, tracked another thing: the number of pull requests, PRs before they had Copilot and after they had Copilot. That is really interesting. Are you tracking the same kind of metric inside your system? Yeah.
John Engler 38:25
So those metrics aren't in our system, but we're partnering tightly with dev productivity vendors, DX, Jellyfish, LinearB, who do have access to some of those knock-on results. But you're right to be skeptical, right? Microsoft and GitHub are huge organizations; we're two years old. This is a strong claim. I think it speaks to the utility and benefit of having that really unique context engine. Our secret sauce is our ability to understand what's important from the existing code base, to look at examples of code that is already within the code base, and to bring that context forward in the correct ways using a retriever that understands code structure, not just code as flat text. That enables us to deliver that differentiated outcome. We do some secret sauce on the generative models as well, but those are changing all the time, and for the most part we're all using the same collection of models on that side; there's a little more commoditization there.
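The "code structure, not flat text" distinction is concrete enough to sketch. A structure-aware retriever can chunk a file by its function and class definitions instead of fixed-size text windows, so every retrieved unit is a complete definition. This illustration uses Python's standard ast module; it shows the idea only and is not Augment's retriever.

```python
# Structure-aware chunking with the stdlib; an illustration of the idea,
# not Augment's actual retrieval mechanism.
import ast

def structural_chunks(source: str) -> list[str]:
    """Chunk Python source by top-level defs and classes, not flat windows."""
    tree = ast.parse(source)
    lines = source.splitlines()
    chunks = []
    for node in tree.body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            # lineno/end_lineno bound the whole definition (Python 3.8+).
            chunks.append("\n".join(lines[node.lineno - 1 : node.end_lineno]))
    return chunks

# A fixed-size window can split a function in half; these chunks cannot,
# so whatever the retriever matches is always a coherent definition.
```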
Natalia Pyalling 39:36
Okay, yeah, thank you.
John Engler 39:38
Yeah.
Roger Luo 39:38
Question.
Kristian 39:42
Yeah, hi. Curious a bit about, you mentioned your retention numbers at the individual user level. Can you remind us what you said there? And what's the overall pipeline looking like, and what's retention at the customer level looking like?
John Engler 40:02
It's a little unique in that we have not yet, and this is to change in the near future, but we don't really have individual users. We made the decision early on not to open Augment up to hobbyists or students or individual developers. We did launch an open source offering just very recently, right before the beginning of the year, and soon we'll be making available a self-service offering that shares the same promises around security as our enterprise customers get. But all of the users using Augment today are affiliated with an organization, and we've made Augment available through sales efforts to their organization. So our user retention is beyond 75 percent within organizations, and this is at the individual user level. Let's say we have 100 devs from Yelp come and join; we'd hold ourselves accountable for 75 percent of those engineers continuing to use Augment. For how long? That's undetermined. As time marches on, invariably we will lose people to attrition and other things outside of our control. But so far that's been the experience since we started onboarding customers in a pre-beta state over the summer.
Roger Luo 41:29
Yeah. Raj, you have a question in the chat. Would you like to ask it?
Raj Nagarajan 41:33
So the question was, I've been playing around with a couple of tools, and the one thing I was curious about, and I don't know whether there's a market for it as well, is: do the answers cater to the profile of the developer asking the questions? I saw the demo about describing the code repo, what services there are, et cetera. But it sometimes makes sense that if you're a junior developer, the answer to that type of question would be slightly different than if you're answering the question for a senior developer. Do you think that is a differentiator?
John Engler 42:11
Matt's got more hands-on-keyboard experience, but my initial thought is that the more senior developer is probably going to ask a more specific question, and they'll get an answer more curated and specific to the tone of the question they're asking. But I don't know if that comes across in practice with any deterministic reliability.

Matt Ball
Yeah, I mean, I think you're right. The nature of the question, how high or low level or specific it is, generally the response comes back at that level. That being said, there's still the possibility to actually tell Augment these things. You can set a guideline, three or four bullet points or whatever, that says: I'm a junior engineer, I just joined the company two months ago, I'm experienced in this, please tailor your responses this way. You can set that in Augment's config, so you don't have to tell it every time. What's really interesting, as we start to look at our future roadmap, is that we're really interested in going beyond code base understanding and bringing in context from elsewhere: documentation, CI/CD systems, and so on. There's even the possibility that you could plug in some kind of HR system that helps Augment understand how long someone's been at the company and what their title is, and then Augment can know that automatically. Vaguely horrifying, and not the perfect example, but the point I'm trying to make is that the more relevant context you can provide the models, the better job they do, and a lot of that could be done programmatically.

John Engler
But Raj, if there's curiosity in the space: anecdotally, what we've observed is that you're spot on. The junior devs are far more apt to engage with the chat as a primary modality. The senior devs are delighted by the individual line completions; they're not trying to do that same discovery or context acquisition type of user flow, they're more rapidly trying to make changes. And they are absolutely shocked and delighted when they see their own code, or references to code they may have committed three years ago, being put back to them by an AI, and it's spot on, it's the right code. So that's the one nuance between experience levels where we do observe patterns.
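For the guidelines mechanism Matt mentions, the shape he describes is a few free-text bullets that travel with every request, something like the snippet below. The file name and format are assumptions for illustration, not documented Augment configuration.

```text
# Hypothetical guidelines file; name and format are assumed, not documented.
- I am a junior engineer; I joined the company two months ago.
- I'm experienced in Python and new to Rust.
- Tailor responses accordingly: explain step by step and link relevant files.
```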
Raj Nagarajan 44:58
Good. Thank you. Thanks, Roman.
Roman Kleiner 45:04
So I'm curious how the feedback loop works. For example, the developer sees a suggestion and says, okay, that's not good, please look over there. We've often seen that with other tools. Can Augment take that and somehow incorporate it into future suggestions?
Matt Ball 45:23
Certainly. If you're having a chat and you redirect Augment, the thread is part of the conversation context. For Next Edit prediction, that agentic workflow I showed, rejecting a suggestion helps Augment understand; it guides it toward the task you're trying to complete. For completions, we generally see that people just keep on typing, and that's what influences the change. So the system is designed to understand directionality. We don't do any training of our models on any kind of customer data, be that code or actions they take; that's really a key part of our philosophy. So yeah, it's much more real time in terms of how we take that feedback.
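A minimal sketch of the real-time-versus-training distinction Matt draws: the chat thread and rejected suggestions travel with the session as in-context signals for the next prediction, and nothing is written back into model weights. All names here are hypothetical.

```python
# Hypothetical session state: feedback as context, never as training data.
from dataclasses import dataclass, field

@dataclass
class Session:
    thread: list[str] = field(default_factory=list)    # chat turns, incl. redirections
    rejected: list[str] = field(default_factory=list)  # suggestions the user declined

    def next_edit_context(self) -> str:
        # Feedback shapes the next request in real time; model weights stay
        # untouched, consistent with the no-training-on-customer-data pledge.
        hints = [f"avoid: {r}" for r in self.rejected[-3:]]
        return "\n".join(self.thread[-5:] + hints)

s = Session()
s.thread.append("user: add session_id to the export query")
s.rejected.append("SELECT * FROM exported_edit_events")
print(s.next_edit_context())
```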
Roman Kleiner 46:19
Okay, understood. By the way, that question that Kent asked just now in the chat is something I wanted to ask too, which is: what kind of models do you guys have?
John Engler 46:31
Several models. There's a handful of models that make up the various user flows. We've curated the models, their size and context window, to reflect the user experience we're hoping to achieve. Even within a given user flow, like chat, we can quickly answer brief questions, but if it's something that requires more context, then it's a slower model that can provide a more verbose answer. They are proprietary; there may be a handful that initially started as open source, but we've done extensive post-training on top of them to get them to behave deterministically and in the way we'd like them to. Chat primarily is Claude Sonnet 3.5; it'll tell you that if you're persistent about asking it to disclose it. So no secrets there. While the extra work we do on the generative models is interesting, the secret sauce is the retrieval mechanisms. That is really what defines Augment and differentiates it from other folks.
Roman Kleiner 47:57
Okay, thank you.
Roger Luo 47:59
Thanks. Corey, do you have a question? Yeah.
Corey Coto 48:03
Hi guys, interesting product. I previously ran product and engineering for one of the big software engineering intelligence platforms, and we were an early adopter of Copilot Business and deployed that with the team. But like with any developer tool, you can bring a horse to water but you can't make it drink, so it can be challenging to win over developers to a new way of working. So regarding user experiences beyond chat and tab: do you offer another user experience for developers to interact with the model?
Matt Ball 48:49
Yeah. So there's a subtle difference we have between the tabbing of completions and the Next Edit guidance that can hop you around from location to location. There's also a subtle variation with chat where you can chat inline; it's effectively an instruction, and it renders a diff rather than a natural language response. And then we're also plugging into Slack. If you're in Slack, you can mention the Augment app and you get the same code base understanding and knowledge over there. We've seen that be particularly helpful when there's a conversation about how something works, or a pager goes off and you can pull in Augment's knowledge of the code base. That Slack use case works with GitHub Enterprise Cloud; we actually index the code base through GitHub there. And it has proved to be beneficial for non-software-engineers as well: product people, sales engineers, people that interact with the product but aren't necessarily developers. It can be a great source to address questions that come up about the code base and the product that it runs.
Corey Coto 50:11
Yeah, that Slack integration is interesting; I didn't know about that. From a product marketing perspective, it may be a good idea to lean into that. It's a differentiator compared to Cursor. I use Cursor nearly every day, and the chat, the composer, the inline editing are very similar. But I don't believe there's a Slack interface where I can query context about my code base.
John Engler 50:41
Right. We hear about Cursor day in and day out. A formidable competitor. They're targeted at a somewhat different, more down-market segment than us, but with much of the same messaging. We intend to lean into these externalities of both context acquisition and context delivery to further differentiate our capabilities. Right now, relative to Cursor, we win when the code base is big. If the code base is 100,000 files or more, there are technical limitations with Cursor that keep it from really being able to grok the entirety of it. And then there are enterprises concerned with using certain extensions outside of the Microsoft ecosystem, things like the Python or C# extensions; technically you're prohibited from using those outside of VS Code. The average hobbyist or student doesn't care, but organizations that are bigger and want to adhere to licensing do.
Dhara Patel 51:49
So, I think one of the challenges that comes with complex systems is that oftentimes the documentation or the architectural diagrams and things like that are not up to date. I see that the documentation is one of the inputs for Next Edit. But do you also help, when you provide suggestions, with not just the code changes but improving documentation, or improving certain architectural patterns? Where does it end, or where does it start? Yeah.
Matt Ball 52:26
So, as of today, any file that's in the code base, be that a Python file, some kind of YAML or config file, or even natural language in a readme: if you update part of the code base, with or without Augment's assistance, it may have a Next Edit suggestion for you, whether that's updating a related config, updating a parameter in another method, or, accordingly, updating the docstring if one exists. So it's happy to update documentation accordingly. Sometimes you might even just drop into a file and it recognizes that there's a difference between the docstring on a method and the method's implementation, and it already has a suggestion for you, and you think, hang on a minute, I haven't done anything yet, why is there a suggestion? It's just trying to correct the difference that it sees. So as long as the files are in the code base, it will try to ensure coherence across the different references.

John Engler
You know, we're losing most of the audience here, and I know you wanted to have some internal Q&A, Roger and Jerry. So my email is jon@augmentcode.com, that's j-o-n at augment code dot com, and Matt's is mb@augmentcode.com. Get a hold of Scott, get a hold of us. I'd love these questions. We'll set up another call, we'll demo for your team, we'll do anything we can to answer the burning questions or make the product available to you guys. Thank you so much for the time, and sorry we went a little over, Jerry and Roger, but thank you.
Roger Luo 54:22
No problem, thanks. We'll share your email with our audience and the other members who couldn't join, and also the recording, if that works for you. And yeah, thanks for coming, really exciting product, and thanks for your partnership.
John Engler 54:37
Oh, thank you for yours. We're all in it together, so thank you.
