Many typical best practices still apply to GPT-4.1, such as providing context examples, making instructions as specific and clear as possible, and inducing planning via prompting to maximize model intelligence. However, we expect that getting the most out of this model will require some prompt migration. GPT-4.1 is trained to follow instructions more closely and more literally than its predecessors, which tended to more liberally infer intent from user and system prompts. This also means, however, that GPT-4.1 is highly steerable and responsive to well-specified prompts - if model behavior is different from what you expect, a single sentence firmly and unequivocally clarifying your desired behavior is almost always sufficient to steer the model on course. - https://cookbook.openai.com/examples/gpt4-1_prompting_guide
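To make the "single firm sentence" point concrete, here is a minimal sketch assuming the official `openai` Python SDK and the Responses API; the instruction wording itself is illustrative, not taken from the prompting guide.

```python
# Sketch: steering GPT-4.1 with one explicit, unambiguous instruction.
# Assumes the `openai` Python SDK (Responses API) and an API key in the environment.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-4.1",
    instructions=(
        "You are a customer support assistant. "
        # A single firm, unequivocal sentence is usually enough to correct behavior:
        "Always answer in exactly three bullet points; never exceed three."
    ),
    input="How do I reset my router?",
)

print(response.output_text)
```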
Few-shot learning lets you steer a large language model toward a new task by including a handful of input/output examples in the prompt, rather than fine-tuning the model. The model implicitly "picks up" the pattern from those examples and applies it to a prompt. When providing examples, try to show a diverse range of possible inputs with the desired outputs.
Typically, you will provide examples as part of a developer message in your API request. Here's an example developer message containing examples that show a model how to classify positive or negative customer service reviews. - https://platform.openai.com/docs/guides/text?api-mode=responses#few-shot-learning
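A minimal sketch of that pattern, assuming the official `openai` Python SDK with the Responses API; the review texts and labels below are illustrative placeholders, not copied from the docs page.

```python
# Sketch: few-shot classification via a developer message (Responses API).
# Assumes the `openai` Python SDK and an API key in the environment.
from openai import OpenAI

client = OpenAI()

# A handful of diverse input/output examples for the model to pick up the pattern.
few_shot_developer_message = """Classify the sentiment of customer service reviews as "positive" or "negative".

Review: "The agent resolved my billing issue in under five minutes."
Sentiment: positive

Review: "I was on hold for an hour and then got disconnected."
Sentiment: negative

Review: "Support followed up the next day to confirm everything worked."
Sentiment: positive
"""

response = client.responses.create(
    model="gpt-4.1",
    input=[
        {"role": "developer", "content": few_shot_developer_message},
        {"role": "user", "content": 'Review: "Nobody ever answered my emails." Sentiment:'},
    ],
)

print(response.output_text)  # expected: negative
```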
https://docs.anthropic.com/en/docs/build-with-claude/prompt-engineering/overview
https://www.lennysnewsletter.com/p/ai-prompt-engineering-in-2025-sander-schulhoff
https://learnprompting.org/blog/the_prompt_report
https://cookbook.openai.com/examples/gpt4-1_prompting_guide
https://platform.openai.com/docs/guides/text?api-mode=responses#few-shot-learning - Few-Shot Learning
https://cookbook.openai.com/examples/partners/model_selection_guide/model_selection_guide