Sualeh Asif
Podcast Appearances
You want a model to guide you through the thing.
I think sometimes. I don't think it's going to be the case that all of programming will be natural language. And the reason for that is, you know, if I'm pair programming with Sualeh and Sualeh is at the computer and the keyboard. And sometimes, if I'm driving, I want to say to Sualeh, hey, implement this function. And that works.
And then sometimes it's just so annoying to explain to Sualeh what I want him to do. And so I actually take over the keyboard and I show him. I write part of the example. And then... it makes sense. And that's the easiest way to communicate. And so I think that's also the case for AI.
Sometimes the easiest way to communicate with the AI will be to show an example, and then it goes and does the thing everywhere else. Or sometimes if you're making a website, for example, the easiest way to show the AI what you want is not to tell it what to do, but drag things around or draw things. And, yeah.
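A minimal sketch of what "show an example" can look like when talking to a model: the prompt carries one hand-written before/after edit and asks the model to apply the same change elsewhere. The prompt wording, function names, and snippets below are illustrative assumptions, not a description of any specific tool from the conversation.

```python
# Hedged sketch: communicating an edit by example instead of by instruction.
# All snippets and the prompt wording are illustrative assumptions.

EXAMPLE_BEFORE = """\
def load_user(id):
    return db.query("SELECT * FROM users WHERE id = %s" % id)
"""

EXAMPLE_AFTER = """\
def load_user(user_id: int) -> User:
    return db.query("SELECT * FROM users WHERE id = %s", (user_id,))
"""

TARGET = """\
def load_order(id):
    return db.query("SELECT * FROM orders WHERE id = %s" % id)
"""

def build_edit_by_example_prompt(before: str, after: str, target: str) -> str:
    """One worked example, then the code the model should rewrite the same way."""
    return (
        "Here is one edit I made by hand.\n\n"
        f"Before:\n{before}\nAfter:\n{after}\n"
        "Apply the same kind of change to this function:\n\n"
        f"{target}"
    )

if __name__ == "__main__":
    print(build_edit_by_example_prompt(EXAMPLE_BEFORE, EXAMPLE_AFTER, TARGET))
```

The design mirrors the point in the quote: one concrete example often carries intent more cheaply than a paragraph of natural-language instruction.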
And like maybe eventually we will get to like brain-machine interfaces or whatever and kind of like understand what you're thinking. And so I think natural language will have a place. I think it will definitely not be the way most people program most of the time.
Yeah, I think it depends on which model you're using. And all of them are slightly different and they respond differently to different prompts. But I think the original GPT-4 and the original sort of pre-double models last year, they were quite sensitive to the prompts. And they also had a very small context window.
And so we have all of these pieces of information around the code base that would maybe be relevant in the prompt. Like you have the docs, you have the files that you add, you have the conversation history. And then there's a problem: how do you decide what you actually put in the prompt when you have limited space?
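To make the "limited space" problem concrete, here is a hedged sketch of one simple policy: score each candidate piece of context (docs, attached files, conversation history), then greedily pack the highest-priority items until a token budget runs out. The priorities, item names, and the rough characters-per-token estimate are assumptions for illustration, not how any particular editor actually does it.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    name: str      # e.g. "current file", "docs page", "chat history"
    text: str
    priority: int  # higher = assumed more relevant (illustrative)

def estimate_tokens(text: str) -> int:
    # Crude heuristic of ~4 characters per token; a real system would
    # run the model's own tokenizer instead.
    return max(1, len(text) // 4)

def pack_context(items: list[ContextItem], budget_tokens: int) -> list[ContextItem]:
    """Greedily keep the highest-priority items that still fit the budget."""
    chosen, used = [], 0
    for item in sorted(items, key=lambda i: i.priority, reverse=True):
        cost = estimate_tokens(item.text)
        if used + cost <= budget_tokens:
            chosen.append(item)
            used += cost
    return chosen

if __name__ == "__main__":
    items = [
        ContextItem("current file", "def handler(event): ...\n" * 200, priority=3),
        ContextItem("library docs", "The API accepts the following arguments...\n" * 400, priority=1),
        ContextItem("conversation history", "User: please fix the failing test.\n" * 100, priority=2),
    ]
    for item in pack_context(items, budget_tokens=4000):
        print(item.name, estimate_tokens(item.text), "tokens")
```

A fixed priority order is the simplest possible choice; a real system would presumably rank items by relevance to the current request, which is exactly the hard part the speaker is pointing at.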