Company
Dynatrace
Date Published
Author
Christian Kiesewetter
Word count
1852
Language
American English
Hacker News points
None

Summary

Organizations are increasingly adopting data-driven approaches to increase the value of their data, improve business results, and cut costs, but they face significant challenges: managing the scale and cost of large data volumes, understanding data in context, and meeting security requirements. Dynatrace addresses these challenges with its new OpenPipeline stream-processing technology, which lets organizations collect, preprocess, filter, and securely manage data at massive scale. OpenPipeline enriches data with context, raising its quality and value by linking it to related data sources, and enables AI-driven insights and automation across domains. By offering high-performance filtering, encryption, and transformation capabilities, OpenPipeline reduces storage costs and helps ensure data privacy and compliance, while its seamless integration with Dynatrace Grail allows for unparalleled data contextualization and analytics. The technology supports up to 500 TB of data ingestion per day per tenant, with plans to scale beyond a petabyte, and is available at no extra cost to Dynatrace SaaS customers on AWS or Azure.
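
To make the stages the summary describes concrete (filter, redact, enrich before storage), here is a minimal sketch of a record-processing pipeline. It is illustrative only: all names (Pipeline, drop_debug, mask_email, add_context) are hypothetical, redaction stands in for the privacy controls mentioned above, and OpenPipeline itself is configured within the Dynatrace platform rather than through a Python API like this.

```python
# Hypothetical sketch of collect -> filter -> mask -> enrich,
# the stage sequence described in the summary. Not the Dynatrace API.
from dataclasses import dataclass, field
from typing import Callable

Record = dict  # a single log/event record

@dataclass
class Pipeline:
    stages: list[Callable[[Record], Record | None]] = field(default_factory=list)

    def stage(self, fn: Callable[[Record], Record | None]) -> "Pipeline":
        self.stages.append(fn)
        return self

    def process(self, record: Record) -> Record | None:
        for fn in self.stages:
            record = fn(record)
            if record is None:  # a filter stage dropped the record
                return None
        return record

def drop_debug(r: Record) -> Record | None:
    # High-performance filtering: discard low-value records at ingest,
    # before they incur storage cost.
    return None if r.get("level") == "DEBUG" else r

def mask_email(r: Record) -> Record:
    # Privacy/compliance: redact sensitive fields before storage.
    if "email" in r:
        r["email"] = "***redacted***"
    return r

def add_context(r: Record) -> Record:
    # Enrichment: attach context (a hypothetical service tag here).
    r.setdefault("service", "checkout")
    return r

pipeline = Pipeline().stage(drop_debug).stage(mask_email).stage(add_context)

records = [
    {"level": "DEBUG", "msg": "cache hit"},
    {"level": "ERROR", "msg": "payment failed", "email": "jane@example.com"},
]
for rec in records:
    out = pipeline.process(rec)
    if out is not None:
        print(out)  # only the masked, enriched ERROR record survives
```

Running the sketch prints only the enriched ERROR record; the DEBUG record is dropped before it would ever reach storage, which is the mechanism behind the storage-cost reduction the summary mentions.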