Jaden Shafer
It's run rate revenue.
According to Wu, code review is going to be aimed, for the most part, at large engineering organizations that are already using Claude Code.
Companies like Uber, Salesforce, and Accenture are already using it.
And engineering leads are going to be able to enable the feature for their teams. Once it's turned on, it automatically analyzes every pull request, and the system integrates with GitHub.
It's going to leave comments directly on the code, pointing out any issues and suggesting fixes.
So instead of a human developer having to manually review all these changes themselves,
they're just going to see that Claude has come through, skimmed the pull request, written a code review, highlighted any issues, and left notes, and they can go review just those notes or any points of interest or concern it might have.
So I think, unlike a lot of other automated code tools that focus heavily on formatting or style, Anthropic is intentionally designing code review to focus on logic errors, which is interesting.
Wu was commenting on this and said: "That's really important. A lot of developers have seen automated feedback before, and they get annoyed when it's not immediately actionable. We decided to focus purely on logic errors, so we're catching the highest-priority problems."
When the AI identifies an issue, it explains its reasoning step by step:
it outlines what it believes the problem is,
why it matters,
and how it can be fixed.
And along with this, issues are also labeled by severity, so it's basically color-coded.
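As a rough illustration of the workflow described above (this is my own sketch, not Anthropic's actual implementation or schema), a review bot like this could post its findings through GitHub's pull-request review-comments API. The helper below builds one such comment payload; the severity levels, color markers, and field layout are assumptions:

```python
# Hypothetical sketch: format a single code-review finding as a GitHub
# pull-request review comment. The severity labels and body layout are
# assumptions for illustration, not Anthropic's actual format.

SEVERITY_MARKER = {"high": "🔴", "medium": "🟡", "low": "🟢"}

def build_review_comment(path, line, commit_id, problem, why, fix, severity):
    """Build a payload for GitHub's
    POST /repos/{owner}/{repo}/pulls/{pull_number}/comments endpoint."""
    marker = SEVERITY_MARKER.get(severity, "⚪")
    body = (
        f"{marker} **{severity.upper()}**: {problem}\n\n"
        f"**Why it matters:** {why}\n\n"
        f"**Suggested fix:** {fix}"
    )
    return {
        "body": body,
        "commit_id": commit_id,
        "path": path,
        "line": line,
        "side": "RIGHT",  # attach the comment to the new version of the file
    }

# Example finding, mirroring the problem/why/fix structure described above:
comment = build_review_comment(
    path="app/billing.py",
    line=42,
    commit_id="abc123",
    problem="Refund amount is never clamped to the original charge.",
    why="A user could be refunded more than they paid.",
    fix="Clamp the refund with min(refund, original_charge).",
    severity="high",
)
```

The dict returned here would be sent as the JSON body of an authenticated POST request; the step-by-step "problem / why it matters / suggested fix" structure maps directly onto what's described in the conversation.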