Sanjay Bhakta
Well, we are actually users of AI.
We are not like some of these LLM companies that create and train their own models, although we have trained models in the past, at a much smaller scale.
So for us, investing in infrastructure and building our data centers for the size of our operation doesn't make sense.
Yeah, for that particular use case, we're actually just leveraging their infrastructure and scale.
We built all the personalization models ourselves and we trained them in-house.
And we've been doing that for the last three or four years, long before LLMs became a thing.
We've had our own data science team and we've been training models.
So we use our own homegrown models for most of the personalization and recommendation work.
Well, actually, for some of the generative use cases that are LLM-based, we are using Amazon Bedrock.
We have a contracts management rights clearance system that we just launched.
We are using Amazon Bedrock for AI-based moderation of user-generated content.
So we're looking more and more now towards using out-of-the-box capabilities that Amazon provides rather than building and training our own models, which we used to do in the past.
I think the need for that is becoming less and less.
That's going well, actually.
We're starting to use ChatGPT quite widely within the enterprise.
Internally, yes.
And for external use cases, we've actually launched an AI-based recipe search on Bon Appetit on our website, and it's also coming to our app, which allows customers to do natural language search and modify recipes according to their taste.
So that's the first use case.
We are looking at others with OpenAI as well.