Justin Shum
First it was text-based, simple stuff, and now they're getting into multimodal.
So video, audio, and you can kind of trace what types of things you're going to be able to do with LLMs for those different formats.
But for us, we looked at the big picture and we realized, yeah, you're right.
We don't know what they're going to release.
It's going to disrupt a lot of startups, but everything is going to be chat-based and conversational.
And so what we're actually building is a platform that's really easy for anybody to get into, to start using and finding value with LLMs without having to prompt.
So it comes down to design principles, and we see a promptless future.
You shouldn't have to converse with your machines in order to extract value from them.
And I'm taking these learnings from the chatbot era, right?
I built in the chatbot era.
As you know, chatbots went through a huge hype cycle, and then they fell off because they just sucked.
Yes, chatbots are a lot more intelligent now, but you probably have experience with ChatGPT.
You understand that the slightest changes or variations in a prompt will result in a completely different outcome and output.
And we're trying to eliminate that variance by, again, creating a product that doesn't require prompts.
Yeah, absolutely.
So it really comes down to the metadata that's available to us and the context of the actual data that you're dropping into our system.
So I mentioned earlier that we're dealing with unstructured data at first.
This is in the form of PDFs, video interviews, things similar to this.
You'd be able to take this recording, drop it in.