Gretel implemented a practical attack on synthetic data models to evaluate how well they protect sensitive information, following the canary-insertion method described in Carlini et al.'s work on unintended memorization in neural networks. The study used a credit card fraud detection dataset containing sensitive fields such as the last four digits of credit card numbers and cardholder names. The team inserted secret values, or "canaries," into the training data and then tested whether four models with different privacy settings (a Vanilla Model, a DP Model, a Noisy DP Model, and a Filtered DP Model) could resist memorizing them. Results showed that the Noisy DP Model offered the strongest privacy protection but sacrificed accuracy, while the Vanilla Model achieved the highest accuracy with the weakest privacy protection. The DP Model struck a balance between accuracy and privacy, making it a reasonable choice when both matter. The study highlights the trade-off between privacy guarantees and model performance: which model to pick depends on the user's priorities.
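To make the attack concrete, the sketch below illustrates the canary-insertion and exposure measurement described by Carlini et al.; it is a minimal illustration, not Gretel's implementation. It assumes a hypothetical `model_log_likelihood` callable that scores a candidate string under the trained synthetic data model, and it computes the exposure metric (log2 of the candidate-space size minus log2 of the true canary's rank), which is high when the model ranks the inserted secret above most alternatives.

```python
import math
import random
import string

def make_canary(prefix: str = "canary", digits: int = 4):
    """Build a secret formatted like the sensitive field (e.g. last-4 card digits)."""
    secret = "".join(random.choices(string.digits, k=digits))
    return f"{prefix}-{secret}", secret

def exposure(log_likelihood, canary: str, candidates: list) -> float:
    """Carlini et al.'s exposure metric: log2 |R| - log2 rank(canary).

    `log_likelihood` is an assumed callable that scores a string under the
    trained model; a higher score means the model finds it more plausible.
    """
    ranked = sorted(candidates, key=log_likelihood, reverse=True)
    rank = ranked.index(canary) + 1  # 1-indexed rank of the true secret
    return math.log2(len(candidates)) - math.log2(rank)

# Usage sketch: rank the inserted canary against all 10,000 possible 4-digit secrets.
# canary, _ = make_canary()  # inserted into the training data before training
# candidates = [f"canary-{i:04d}" for i in range(10_000)]
# print(exposure(model_log_likelihood, canary, candidates))  # model_log_likelihood is assumed
```

Under this metric, a model that memorizes the canary ranks it near the top of the candidate list, pushing exposure toward log2(10,000), roughly 13.3 bits, while a model with effective differential privacy protections should yield exposure near zero.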