We conducted an experiment to evaluate techniques for maximizing Claude's recall over long documents, focusing on its performance on multiple-choice question-and-answer (Q&A) pairs. The goal was to identify prompting strategies that help Claude accurately recall specific pieces of information from a long document. We tested four prompting strategies (base, nongov examples, two examples, and five examples), each with and without a scratchpad for pulling relevant quotes. Our results showed that using more examples, providing contextual examples, and pulling relevant quotes into the scratchpad can significantly improve Claude's recall at both short and long context lengths. We also found that for Claude Instant, performance decreases monotonically as the relevant passage moves farther from the question at the end of the prompt, while Claude 2's performance shows only a small dip when the passage sits in the middle of the context. The experiment highlights the importance of careful prompting and of using relevant quotes to improve the accuracy of long-context Q&A prompts.
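
To make the scratchpad strategy concrete, here is a minimal sketch of how such a prompt could be assembled. The tag names, wording, and the `build_scratchpad_prompt` helper are illustrative assumptions rather than the exact prompt used in the experiment.

```python
# A minimal sketch of a long-context Q&A prompt that asks the model to pull
# relevant quotes into a scratchpad before answering. Tag names, wording, and
# the helper itself are illustrative assumptions, not the exact experimental prompt.

def build_scratchpad_prompt(document: str, examples: str, question: str, choices: str) -> str:
    return f"""I will give you a long document, followed by a multiple-choice question about it.

<document>
{document}
</document>

Here are some example questions about this document that were answered correctly:
<examples>
{examples}
</examples>

First, copy the quotes from the document that are most relevant to the question
into a scratchpad. Then answer the question using only those quotes.

<question>
{question}
{choices}
</question>

Write the relevant quotes inside <scratchpad></scratchpad> tags, then give your
final answer inside <answer></answer> tags."""


if __name__ == "__main__":
    # Hypothetical placeholders; in the experiment the document and examples
    # would be the full long document and Q&A pairs drawn from it.
    prompt = build_scratchpad_prompt(
        document="(full long document text here)",
        examples="(two or five correctly answered Q&A examples)",
        question="Which agency issued the regulation described in section 3?",
        choices="(A) ... (B) ... (C) ... (D) ...",
    )
    print(prompt[:500])
```

In this sketch, the examples block corresponds to the "two examples" and "five examples" conditions, and the scratchpad instruction corresponds to the quote-pulling variant of each strategy.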