How to move: Alteryx to Prophecy

Prophecy provides a complete solution for data transformation that natively leverages cloud data platforms like Databricks. With Prophecy, you can modernize data preparation and deliver a solution that provides the following:

  • Every data user is enabled. Prophecy enables all of your data users, including data analysts on line-of-business teams, to self-serve and prepare data for AI and analytics.
  • Faster time to market. Pipelines built by data analysts match the quality of those built by experienced data engineers and run at scale on datasets in Databricks.
  • Native scale and performance. Visually built pipelines turn into high-quality code that runs natively on Databricks, matching the scale and performance of pipelines hand-coded by the best data engineers.
  • Cost-effective. You get a single solution for both the central data platform and line-of-business teams, eliminating the cost of managing a second system and of rewriting pipelines for scale.
  • Simple migration. Prophecy's Migration Copilot provides an automated migration path to move your users and their existing pipelines to the new solution.

Architecture

Many customers who run Alteryx Designer and Alteryx Server on premises have also adopted a cloud data platform such as Databricks. This often results in two parallel data stacks.

Moving to Prophecy simplifies your data stack by enabling your data analysts directly on Databricks. Because every pipeline in Prophecy is also native code, the same Git repository stores all data pipelines, whether generated by line-of-business users or coded by data engineers. Moving pipelines developed by data analysts into production then follows the same central process of testing and CI/CD.
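
To make "pipelines are native code" concrete, here is a minimal sketch of the kind of PySpark code a visual pipeline can compile down to. The function name, table names, and transformation steps are hypothetical, and the code Prophecy actually generates will differ in structure and detail:

    # Hypothetical sketch of generated pipeline code; illustrative only.
    from pyspark.sql import DataFrame, SparkSession
    from pyspark.sql import functions as F

    def customer_revenue_pipeline(spark: SparkSession) -> DataFrame:
        # Source step: read a governed table (hypothetical name)
        orders = spark.read.table("main.sales.orders")

        # Transform step: filter and aggregate, as configured visually
        revenue = (
            orders
            .filter(F.col("status") == "COMPLETED")
            .groupBy("customer_id")
            .agg(F.sum("amount").alias("total_revenue"))
        )

        # Target step: write the result back to the platform
        revenue.write.mode("overwrite").saveAsTable("main.sales.customer_revenue")
        return revenue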

Will it work? The evaluation checklist

Once customers understand the value of Prophecy, we also want to ensure that the solution will meet the needs of their organization.

The first step is to confirm that the core features your existing users depend on are in place:

  • Develop pipelines. We want to make sure that Prophecy provides an interface that delights your line-of-business users. We'll walk through Prophecy's visual operators for data transformation, build pipelines, and confirm the functionality works for your users.
  • Orchestrate pipelines. Pipelines have steps beyond data transformation. We'll run real orchestration for your workflows. For example: wait on a folder in an Amazon S3 bucket, trigger when a new file arrives, run the data transformation, push an extract to Tableau, and email the user the run status (a generic sketch of this sequence follows the list).
  • Deploy to production. Pipelines are usually built and tested in the development environment and then moved to production where they might run every day on production data. We'll test the process to move changes from development to production in your environment to ensure it is accessible and robust.
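
As an illustration of the orchestration described above, here is a minimal, generic Airflow-style sketch of the same sequence. This is not Prophecy's own orchestration interface, and the bucket, key, email address, and callables are hypothetical placeholders:

    # Generic Airflow sketch of the workflow above; an illustrative
    # equivalent, not Prophecy's orchestration operators.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.email import EmailOperator
    from airflow.operators.python import PythonOperator
    from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor

    def run_pipeline():
        """Placeholder for the data transformation step."""

    def push_tableau_extract():
        """Placeholder for publishing an extract to Tableau."""

    with DAG("sales_ingest", start_date=datetime(2024, 1, 1),
             schedule="@daily", catchup=False):
        # Wait for a new file in the S3 folder (hypothetical bucket and key)
        wait_for_file = S3KeySensor(
            task_id="wait_for_file",
            bucket_name="example-landing-zone",
            bucket_key="sales/*.csv",
            wildcard_match=True,
        )
        transform = PythonOperator(task_id="transform",
                                   python_callable=run_pipeline)
        publish = PythonOperator(task_id="publish_tableau_extract",
                                 python_callable=push_tableau_extract)
        # Email the user the run status
        notify = EmailOperator(
            task_id="notify",
            to="analyst@example.com",
            subject="sales_ingest completed",
            html_content="The pipeline run finished.",
        )
        wait_for_file >> transform >> publish >> notify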

The second step is to confirm that the benefits of a modern architecture will be realized:

  • Copilot AI. Prophecy's AI enables more users and makes them more productive. We'll show your team how it can predict the next transformation, convert text prompts into expressions and pipelines, and generate tests and documentation.
  • Governed and native experience. Prophecy natively respects the governance your data platform already enforces, such as Unity Catalog in Databricks. We'll develop pipelines that run step by step inside Databricks and leverage the platform's established governance (see the sketch below).
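
As a concrete example of that platform-native governance, a pipeline step on Databricks reads and writes through Unity Catalog's three-level namespace, so access is checked against the grants already defined there. The catalog, schema, and table names below are hypothetical:

    # `spark` is the session provided by the Databricks runtime.
    # Names follow Unity Catalog's catalog.schema.table convention
    # and are hypothetical; access is governed by existing grants.
    from pyspark.sql import functions as F

    orders = spark.read.table("main.sales.orders")       # governed source
    high_value = orders.filter(F.col("amount") > 1000)   # transformation step
    high_value.write.mode("append").saveAsTable("main.sales.high_value_orders")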

The third step is to understand the migration effort required to move:

  • User adoption. Users of all kinds will try Prophecy to confirm that the product is familiar, meets their needs, and offers a quick path to adoption.
  • Automated migration. Prophecy’s Migration Copilot can automatically convert most data pipeline logic. Source data must also be moved to the cloud, and a few representative workflows should be converted to estimate the effort the transition requires (a sketch of one such conversion follows).
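
To give a feel for what automated conversion looks like, an Alteryx Formula-tool expression maps naturally onto a PySpark column expression. The sketch below shows the shape of that translation; the column name and sample data are hypothetical, and Migration Copilot's actual output will differ:

    # Alteryx Formula tool:  IF [Sales] > 100 THEN "High" ELSE "Low" ENDIF
    # Equivalent PySpark expression (illustrative translation only).
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(150,), (80,)], ["Sales"])  # sample input

    df = df.withColumn(
        "sales_band",
        F.when(F.col("Sales") > 100, "High").otherwise("Low"),
    )
    df.show()  # each row gets "High" or "Low" in sales_band
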
Next: Develop Pipelines