The article explores the application of GPT-NER, a method that uses large language models (LLMs) for named entity recognition (NER), in a truly few-shot setting with the Few-NERD dataset and the Llama 2 models available on the Clarifai platform. Few-shot NER aims to identify and categorize named entities from only a handful of labeled examples, a task that remains challenging for traditional deep learning models. The study assesses how Llama 2 model size and the number of few-shot examples affect performance. It finds that while larger models such as the 70B variant achieve higher recall, smaller models such as the 13B variant excel in precision, and all models benefit from additional examples. The study also notes practical challenges, including the need for a separate prompt per entity type and inconsistencies in model output, which point to room for improvement through more advanced prompt engineering and related techniques. The research highlights the potential of GPT-NER for few-shot NER while identifying future directions, including self-verification techniques and fine-grained classification.
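To make the setup concrete, the sketch below shows, under loose assumptions, how a GPT-NER-style prompt for a single entity type might be assembled from a few demonstrations and how the marked completion could be parsed. The function names and example sentences are hypothetical and not taken from the article; the @@/## entity markers follow the convention described in the original GPT-NER paper.

```python
import re

# Illustrative few-shot demonstrations for the "person" entity type:
# (input sentence, same sentence with target entities wrapped in @@ ... ##).
FEW_SHOT_EXAMPLES = [
    ("Barack Obama visited Berlin in 2013.",
     "@@Barack Obama## visited Berlin in 2013."),
    ("The novel was written by Haruki Murakami.",
     "The novel was written by @@Haruki Murakami##."),
]

def build_gpt_ner_prompt(entity_type: str, examples, sentence: str) -> str:
    """Builds one prompt per entity type: a task description, k demonstrations,
    then the query sentence. GPT-NER issues a separate prompt for each type."""
    lines = [
        "You are an expert in named entity recognition.",
        f"Task: mark every {entity_type} entity in the input sentence by "
        "surrounding it with @@ and ##. Copy the sentence otherwise unchanged.",
        "",
    ]
    for src, marked in examples:
        lines += [f"Input: {src}", f"Output: {marked}", ""]
    lines += [f"Input: {sentence}", "Output:"]
    return "\n".join(lines)

def extract_entities(marked_output: str) -> list[str]:
    """Parses @@...## spans from the model completion back into entity strings."""
    return re.findall(r"@@(.+?)##", marked_output)

prompt = build_gpt_ner_prompt("person", FEW_SHOT_EXAMPLES,
                              "Angela Merkel met the delegation in Paris.")
# The prompt would then be sent to a Llama 2 model (for example through the
# Clarifai platform), and extract_entities() applied to the returned completion.
print(prompt)
```

Because each prompt targets one entity type, covering all Few-NERD types requires one model call per type per sentence, which is one of the practical costs the article raises.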