Moving Off Alteryx?

For decades, Alteryx has empowered organizations by placing data preparation and analytics capabilities directly into the hands of enterprise users. This self-service approach has allowed business teams to manage their own data pipelines without relying on central data platform teams for everyday tasks.

Originally developed in the 1990s for the Windows desktop, long before the advent of smartphones, cloud computing, or big data, Alteryx was a good solution for its time.

Customer priorities

When we talk to customers looking to move off Alteryx, their priorities boil down to the following:

Modern product - users want a product that is born in the cloud and is:

  • Native. The product should run natively on and fully integrate with their Spark- or SQL-based cloud data platform, such as Databricks.
  • Intelligent. Generative AI has made developers more productive at programming. Customers want the same AI-powered productivity gains for data transformation.
  • Cost-effective. Customers want to avoid the costs of having to maintain a second system. They also want to eliminate the duplicate efforts of rewriting pipelines for production.

Built for production - data pipelines often start as ad-hoc but need to run in a production-ready system that is:

  • Well governed. The product should respect the security and governance policies centralized in cloud data platforms through tools such as Databricks Unity Catalog, rather than requiring separate policies to cover duplicated data.
  • Scalable. Pipelines should be developed against the entire data set, allowing all data users to spot corner cases and eliminating the need for data engineers to rewrite pipelines for production. The system should also leverage the full processing power of the enterprise cloud data platform.
  • Change management ready. Ad-hoc development can be easy. But to put pipelines in production users need to have environments for dev and prod, git for version control and tests, and a standard mechanism to orchestrate pipelines and deploy them to production.

Easy to migrate - users want to leverage their existing workflows and move to a system with:

  • Zero lock-in. Users want to avoid having their pipeline business logic locked into a proprietary format. They instead want to build pipelines that are backed by cloud-native open-source code stored in Git.
  • Automated migration. Users want a simple, automated migration path to transition their existing workflows, reducing the effort and costs of having to rewrite complex, mission-critical pipelines.
  • Familiar interface. A visual interface that’s familiar to users lowers the friction of having to learn a new tool and eliminates adoption gaps during the switchover.

What are the solutions?

When moving self-service data preparation to the cloud, organizations are looking for an enterprise-grade replacement. There are two main options:

  • Write SQL code. Some line-of-business teams have users who can write SQL, and products such as dbt Core can standardize the way it is written. However, SQL for multi-step data preparation is far more complex than SQL for simple lookups. And skilled data engineers are still needed to manage and orchestrate SQL-based pipelines, making this approach a non-starter for teams used to self-service.
  • Use Prophecy. Prophecy provides the right solution for data transformation. Its Data Transformation Copilot runs natively on your cloud data platform, delivering productivity for line-of-business users in a way that is well governed, scalable, and cost-effective.
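To make the gap between "simple lookups" and "multi-step data preparation" concrete, here is a minimal sketch using SQLite and an invented `orders` table (the table, values, and thresholds are purely illustrative, not from any real pipeline). The one-line lookup is the kind of SQL many business users write comfortably; the chained CTE pipeline that filters, aggregates, and then classifies is the kind of logic that grows hard to author and maintain without engineering support.

```python
import sqlite3

# Illustrative data only: a tiny in-memory orders table.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE orders (order_id INTEGER, customer_id INTEGER,
                     amount REAL, status TEXT);
INSERT INTO orders VALUES
  (1, 10, 120.0, 'shipped'),
  (2, 10,  80.0, 'returned'),
  (3, 11, 200.0, 'shipped');
""")

# A simple lookup: one table, one filter.
lookup = con.execute(
    "SELECT amount FROM orders WHERE order_id = 3"
).fetchone()

# Multi-step preparation: chained CTEs where each step
# (filter -> aggregate -> classify) depends on the previous one.
pipeline = con.execute("""
WITH shipped AS (
    SELECT customer_id, amount FROM orders WHERE status = 'shipped'
),
totals AS (
    SELECT customer_id, SUM(amount) AS total
    FROM shipped GROUP BY customer_id
)
SELECT customer_id, total,
       CASE WHEN total >= 150 THEN 'high' ELSE 'low' END AS tier
FROM totals
ORDER BY customer_id
""").fetchall()

print(lookup)    # (200.0,)
print(pipeline)  # [(10, 120.0, 'low'), (11, 200.0, 'high')]
```

Even this toy pipeline already needs ordering, intermediate naming, and aggregation semantics that a lookup query never touches, which is why real production pipelines additionally need orchestration and version control on top of the SQL itself.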
Next: Why Prophecy?