In-house text annotation for Natural Language Processing (NLP) models is becoming increasingly viable thanks to advances in transfer learning, active learning, and programmatic supervision, which reduce the volume of labeled data required and give teams tighter control over data quality. Labeling was previously outsourced almost by default: the datasets were too large, and managing annotation internally was too costly and complex. The newer techniques change this calculus. Pre-trained models need only modest fine-tuning, and active learning concentrates annotation effort on the examples most likely to improve the model, so far fewer labels are needed. Bringing annotation in-house also keeps sensitive data private and puts domain experts, rather than external annotators, directly in the labeling loop. Humanloop's Programmatic platform illustrates the trend: it pairs a user-friendly annotation interface with active learning and fast feedback loops for machine learning engineers.
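To make the "focus on the most impactful data" idea concrete, here is a minimal sketch of uncertainty sampling, one common active-learning strategy. Everything in it is illustrative: the texts and probability distributions are hypothetical stand-ins for the outputs of a real pre-trained classifier, not part of any particular platform's API.

```python
# Minimal sketch of uncertainty sampling for active learning (illustrative;
# the texts and probabilities below are hypothetical model outputs).
def select_for_annotation(predictions, batch_size):
    """Pick the unlabeled examples the model is least confident about.

    predictions: list of (text, class_probabilities) pairs.
    Returns the batch_size texts with the lowest top-class probability.
    """
    scored = [(1 - max(probs), text) for text, probs in predictions]
    scored.sort(reverse=True)  # most uncertain first
    return [text for _, text in scored[:batch_size]]

# Mock model outputs: probability distributions over two sentiment labels.
predictions = [
    ("would buy again",     [0.95, 0.05]),  # confident -> low priority
    ("not sure how I feel", [0.52, 0.48]),  # uncertain -> annotate first
    ("pretty good overall", [0.70, 0.30]),
    ("worst purchase ever", [0.03, 0.97]),  # confident -> low priority
]

print(select_for_annotation(predictions, batch_size=2))
# -> ['not sure how I feel', 'pretty good overall']
```

Each annotation round would retrain the model on the newly labeled batch and re-score the remaining pool, so labeling effort keeps flowing to the examples the current model finds hardest.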