Podcast Appearances
I want to learn from Ben.
How is he constructing prompts?
How is he interacting with these LLMs?
It lets us have access to all sorts of different backends.
We use Claude Code extensively.
You can get to Opus through a variety of means.
You can go directly to Anthropic.
You can use Bedrock.
We've got both of those configured inside of Tailscale.
There have been times when Anthropic or Bedrock had issues, and being able to just quickly switch over to the other one has been really valuable.
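As a rough sketch, that kind of backend switch in Claude Code is typically driven by environment variables; the variable names below follow Claude Code's documented Bedrock integration, but treat them as an assumption and check your version's docs.

```shell
# Route Claude Code through Amazon Bedrock instead of the Anthropic API.
# Variable names assume Claude Code's documented Bedrock support.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1   # a Bedrock region with Claude models enabled

# To fall back to Anthropic directly, unset the flag and use an API key:
# unset CLAUDE_CODE_USE_BEDROCK
# export ANTHROPIC_API_KEY=...
```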
It gives us metrics into token usage across the team, so input, output, cache tokens, reasoning tokens, who's using what, that kind of workload.
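A back-of-the-envelope sketch of that per-user aggregation. The log fields here (user, input_tokens, output_tokens, cache_tokens) are hypothetical; real logs will have their own schema.

```python
# Toy aggregation of per-user token usage from API-call log records.
# Field names are made up for illustration, not from a real log format.
from collections import defaultdict

def usage_by_user(records):
    """Sum token counts per user across logged API calls."""
    totals = defaultdict(lambda: {"input": 0, "output": 0, "cache": 0})
    for r in records:
        t = totals[r["user"]]
        t["input"] += r["input_tokens"]
        t["output"] += r["output_tokens"]
        t["cache"] += r["cache_tokens"]
    return dict(totals)

records = [
    {"user": "alice", "input_tokens": 1200, "output_tokens": 300, "cache_tokens": 900},
    {"user": "alice", "input_tokens": 800, "output_tokens": 200, "cache_tokens": 700},
    {"user": "bob", "input_tokens": 500, "output_tokens": 100, "cache_tokens": 0},
]
totals = usage_by_user(records)
```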
It gives our security team just visibility.
Most people don't realize this: every API call is stateless, which basically means the entire context window is getting shipped back and forth across the wire every single time.
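A minimal sketch of what "stateless" means in practice. The fake_complete function below is a stand-in for a real model endpoint (the name is made up); the point is that every request carries the full message history so far.

```python
# Stateless chat API sketch: each call resends the whole conversation.

def fake_complete(messages):
    """Pretend model call: returns a canned reply plus how many
    messages went over the wire in this one request."""
    payload_turns = len(messages)  # the entire context window, every time
    return {"role": "assistant", "content": f"reply after {payload_turns} turns"}, payload_turns

history = [{"role": "user", "content": "first question"}]
reply, sent = fake_complete(history)       # call 1 ships 1 message
history.append(reply)

history.append({"role": "user", "content": "follow-up"})
reply2, sent2 = fake_complete(history)     # call 2 ships 3 messages: the whole window again
```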
We log every single one of those, and then we've got some tech in there so that you can consolidate those into sessions.
So you can actually go through all the API calls in a given coding session, get context, and understand what's going on with that, which is really helpful for visibility.
It's like, oh, what was Carney working on at 2 a.m., for instance?
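A rough sketch of consolidating logged API calls into sessions. The grouping rule here, calls from one user within a 30-minute idle gap belong together, and the log field names are both assumptions for illustration.

```python
# Group a user's timestamped API-call log records into coding sessions,
# splitting whenever the gap between consecutive calls exceeds a threshold.

IDLE_GAP_SECONDS = 30 * 60  # assumed session-break threshold

def sessions_for_user(calls):
    """Partition one user's calls (each with a numeric 'ts') into sessions."""
    sessions = []
    current = []
    last_ts = None
    for call in sorted(calls, key=lambda c: c["ts"]):
        if last_ts is not None and call["ts"] - last_ts > IDLE_GAP_SECONDS:
            sessions.append(current)   # idle gap too long: close the session
            current = []
        current.append(call)
        last_ts = call["ts"]
    if current:
        sessions.append(current)
    return sessions

calls = [
    {"ts": 0,    "path": "/v1/messages"},
    {"ts": 120,  "path": "/v1/messages"},
    {"ts": 7200, "path": "/v1/messages"},  # two hours later: a new session
]
groups = sessions_for_user(calls)
```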
And then from, I guess, maybe more of a legal and compliance standpoint, you could actually start pointing your Git histories back to individual coding sessions.
Be like, oh, this code was developed in conjunction with this developer, and here's the proof of why and who contributed to it and how, because I know that's an issue that some legal teams have brought up.
And for our security team, we can export the logs.