Justin Shum
We would transcribe it.
We would understand the context of those conversations.
And then we could have auto-suggested next steps.
Very similar to a chatbot experience now.
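As a rough illustration of the flow described here, a minimal sketch follows. All function names and return values are hypothetical stand-ins, not the product's actual API: the conversation is transcribed, its context is distilled, and a few auto-suggested next-step prompts are surfaced for the user to click instead of type.

```python
# Minimal sketch of the described flow (all names hypothetical).

def transcribe(audio_path: str) -> str:
    # Stand-in for a speech-to-text call; returns raw transcript text.
    return f"<transcript of {audio_path}>"

def summarize_context(transcript: str) -> str:
    # Stand-in for an LLM call that condenses the conversation into context.
    return f"Key topics and decisions from: {transcript[:60]}"

def suggest_next_steps(context: str, k: int = 3) -> list[str]:
    # Stand-in for an LLM call that proposes k clickable follow-up prompts.
    return [
        "Draft a follow-up email summarizing action items",
        "List open questions raised in this conversation",
        "Schedule the next check-in based on what was discussed",
    ][:k]

if __name__ == "__main__":
    context = summarize_context(transcribe("call_recording.wav"))
    for prompt in suggest_next_steps(context):
        print("-", prompt)
```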
I hate saying this and comparing us to this, but when you're chatting with a dumb chatbot, it has the decision trees, the little buttons that you can select for a different outcome.
So we're actually providing... We're still leveraging prompts, but we're offering auto-suggested prompts so a user can simply select the next output.
And so they go through this decision tree.
And then we're capturing that in a visual way so that you can arrive at a conclusion or a desired output and then backtrack to see, you know, what variations or changes you can make for a variant of that output.
Unlike, you know, a conversation interface, where all your prompts get lost.
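One way to picture the decision tree being described is the sketch below. The names are hypothetical, not the product's real data model: every selected auto-suggested prompt becomes a node, so the full path is preserved and you can backtrack to an earlier step and branch off a variant rather than losing the history the way a linear chat thread does.

```python
# Minimal sketch of a prompt tree with backtracking (hypothetical names).
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class PromptNode:
    prompt: str                      # the auto-suggested prompt the user selected
    output: str = ""                 # model output produced for that prompt
    parent: PromptNode | None = None
    children: list[PromptNode] = field(default_factory=list)

    def branch(self, prompt: str, output: str = "") -> PromptNode:
        """Select a suggestion from this point, creating a new branch."""
        child = PromptNode(prompt=prompt, output=output, parent=self)
        self.children.append(child)
        return child

    def path(self) -> list[str]:
        """Walk back to the root so the full decision path can be visualized."""
        node, steps = self, []
        while node is not None:
            steps.append(node.prompt)
            node = node.parent
        return list(reversed(steps))


# Usage: reach a desired output, then backtrack and try a variant branch.
root = PromptNode("Summarize the uploaded report")
compare = root.branch("Compare revenue across the two years")
variant = root.branch("Compare gross margin across the two years")
print(compare.path())  # ['Summarize the uploaded report', 'Compare revenue across the two years']
print(variant.path())  # the alternate branch, with the earlier step still intact
```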
For example, if you were to drop in a PDF, let's say a Tesla earnings report, maybe the past two years, and you wanted to run some analysis and compare those two years in a certain product segment, you could drop them in.
Well, you're not, though, because you're dropping both in, and we automatically know there are two different folders.
We can identify the files. We can identify the relationships and then have auto-suggested prompts, because we know you probably want to do some type of comparison.
And so that's the design.
Those are the design choices we're making.
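A minimal sketch of that auto-suggestion step appears below, under the assumption that related uploads can be grouped by a shared filename pattern (for instance, two earnings reports that differ only by year). All names and the grouping heuristic are hypothetical; a real system would likely use richer metadata or an LLM to detect the relationship between files.

```python
# Minimal sketch: group related uploads and propose a comparison prompt.
import re
from collections import defaultdict


def suggest_for_uploads(filenames: list[str]) -> list[str]:
    # Group files whose names differ only by a year, then suggest a comparison.
    groups: dict[str, list[str]] = defaultdict(list)
    for name in filenames:
        key = re.sub(r"(?<!\d)(19|20)\d{2}(?!\d)", "<year>", name.lower())
        groups[key].append(name)

    suggestions = []
    for files in groups.values():
        if len(files) >= 2:
            suggestions.append(
                f"Compare {' and '.join(sorted(files))} by product segment"
            )
    return suggestions


print(suggest_for_uploads([
    "tesla_earnings_2023.pdf",
    "tesla_earnings_2024.pdf",
]))
# ['Compare tesla_earnings_2023.pdf and tesla_earnings_2024.pdf by product segment']
```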
Yeah, absolutely.
So there are options, various options, which open up different drawers to simplify that process.
It's an auto-suggested prompt that you're selecting.
There's no cognitive load for you to think, how should I phrase this for a desired output?