Abliteration is a technique for modifying language models by removing refusal behaviors, and recent refinements have improved its efficacy while better preserving model capabilities. One such refinement, norm-preserving biprojected abliteration, removes only the directional component of refusal from the model's weights while keeping the weight norms of each layer intact. Because the magnitude of a weight vector reflects the model's learned importance structure, preserving norms minimizes unintended damage to unrelated capabilities.

Applied to the Gemma3 12B Instruct model, the method removed refusals effectively and improved reasoning performance on benchmarks. Layer selection relies on a heuristic based on signal-to-noise ratio and cosine dissimilarity, enabling efficient multi-layer interventions that prevent the model's self-repair mechanisms from reintroducing refusal behavior.

This refined approach suggests that removing directionally encoded safety constraints can unlock latent reasoning capability, though it also underscores the need for careful consideration of the safety implications.
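The core operation described above can be sketched as follows. This is a minimal illustration, not the authors' exact recipe: it assumes a refusal direction `r` estimated as a difference of mean activations, applies a one-sided projection to a single weight matrix (the "biprojected" variant presumably also projects on the input side), and then rescales each row back to its original norm. All function and variable names here are hypothetical.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate a refusal direction as the normalized difference of mean
    activations on refused vs. complied prompts (a common heuristic)."""
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def norm_preserving_ablate(W: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Remove each row's component along direction r, then rescale every
    row to its original L2 norm so weight magnitudes are preserved."""
    r = r / np.linalg.norm(r)                        # unit refusal direction
    orig_norms = np.linalg.norm(W, axis=1, keepdims=True)
    W_abl = W - np.outer(W @ r, r)                   # subtract projection onto r
    new_norms = np.linalg.norm(W_abl, axis=1, keepdims=True)
    return W_abl * (orig_norms / np.maximum(new_norms, 1e-8))

# Toy demonstration with random data in place of real activations/weights.
rng = np.random.default_rng(0)
harmful = rng.normal(size=(32, 16)) + 0.5          # synthetic "refusal" activations
harmless = rng.normal(size=(32, 16))
r = refusal_direction(harmful, harmless)

W = rng.normal(size=(8, 16))
W2 = norm_preserving_ablate(W, r)
# Row norms are unchanged, and the output along r is (numerically) zero.
assert np.allclose(np.linalg.norm(W2, axis=1), np.linalg.norm(W, axis=1))
assert np.allclose(W2 @ r, 0, atol=1e-6)
```

Because each row is rescaled by a scalar after the projection, rows stay orthogonal to `r` while recovering their original magnitude, which is the sense in which the edit is norm-preserving.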