Discussion about this post

David Moser

Karpathy sounds like a gaslighting victim. First the AI bros tell us we're close to AGI and the bots are at PhD level. Then, when the tools randomly break, it's our fault for not instructing them correctly.

If I had a PhD-level worker in my company who consistently made mistakes and then blamed them on my instructions, I'd let them go.

We need improvements in the tool itself, like asking questions when the prompt is missing information or when a clarification would simplify the solution. We don't need more prompt engineering.

Dan McRae

You make a lot of complicated things clear in this article. Thanks.
