I have been working on Hive, Impala, Spark SQL, PySpark and Redshift for a few years now, and in that time I have migrated many applications from RDBMS like Teradata, Netezza and Oracle to cloud or on-premise Hadoop environments for various clients. I have seen client preferences shift from Hive to ACID Hive to Spark SQL to PySpark. In this post I will share my observations so far on migrating an existing Teradata application to Hadoop.
The first question that should come to your mind is: do you really want to move to Hadoop?
A few common reasons for moving to Hadoop are:
- Teradata licence is expiring soon, so you are looking for alternatives before renewal.
- Teradata capacity (storage + computation) is choked and there is little room for adding new applications.
- Migrating some heavy computation, i.e. high-AMPCPU-consuming jobs, to Hadoop, thus complementing the existing Teradata warehouse with Hadoop.
- Migrating jobs that involve high-volume data, or freeing up some storage space in Teradata.
- Non-critical jobs or specific business applications that require analytics which Teradata does not support, or supports only minimally.
- Applications with many ad-hoc requests hogging Teradata resources.
- Moving semi-structured or unstructured data into Hadoop for future analytics support.
Before you move away from Teradata, there are a few technology-related points you may have to consider.
- Do you want to move to cloud or on-premise Hadoop?
- What should the tech stack be in the target environment?
- Do you have an internal team or the capability to support the chosen technology?
- What are the pros and cons of each technology?
- Which technology suits your requirements best?
- How much budget do you have for testing the first phase?
Many clients prefer to stay technology agnostic, so you may not want to be tied to a single vendor or Hadoop distributor. A lot depends on the use-case you have picked for moving off Teradata. I have worked with a client who intentionally involved Cloudera and Databricks along with AWS EMR just to retain the flexibility to move away from one vendor at a later stage if something did not go as expected. I think that was a smart decision.
Another client migrated a Teradata dimensional model to EMR Spark, but preferred Redshift for ad-hoc business queries. So the ETL was done in EMR Spark and the processed data was pushed to Redshift for business queries. That makes sense: if you need fast responses to interactive queries, Spark or Hive is not the ideal technology. Impala and Redshift generally have better response times for ad-hoc business queries than Spark/Hive.
Let's talk about the technical considerations, from the SQL perspective, that you must weigh before moving to Hadoop. You must understand that Teradata is built to run SQL queries at lightning speed. Teradata follows the ANSI SQL standard with a high degree of overlap, and on top of that it has SQL extensions that definitely make a SQL developer's life easier. However, Hadoop was NOT built to run SQL queries, so Hive/Spark still has a lot of catching up to do when it comes to supporting SQL standards. In short, SQL queries that work in Teradata may not work on Hive/Spark.
A few common examples of SQL components that are not supported in Hive:
- Recursive Queries
- Nested Subqueries and multiple Subqueries
- Outer Joins with Non-equi join conditions
- Macros
- Stored Procedures
- Views don't support DML operations.
- Datatypes like Time, Period and other complex types.
- Constraints like Primary Key, Foreign Key, Unique, NOT NULL
- SET tables
- Triggers
- Update/Merge/Delete (need Hive transactional tables to support; see the sketch after this list)
- QUALIFY clause (a common rewrite is also shown in the sketch after this list)
- Identity columns
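To make the QUALIFY and Update/Merge/Delete points concrete, here is a sketch of the usual workarounds. The table and column names (orders, customer_id, order_ts) are hypothetical, and the exact ACID requirements depend on your Hive version:

```sql
-- Teradata: keep the latest order per customer with QUALIFY:
--   SELECT * FROM orders
--   QUALIFY ROW_NUMBER() OVER (PARTITION BY customer_id
--                              ORDER BY order_ts DESC) = 1;
-- Hive has no QUALIFY, so push the window function into a
-- subquery and filter in the outer query:
SELECT *
FROM (
  SELECT o.*,
         ROW_NUMBER() OVER (PARTITION BY customer_id
                            ORDER BY order_ts DESC) AS rn
  FROM orders o
) t
WHERE rn = 1;

-- UPDATE/DELETE/MERGE need a transactional (ACID) table:
-- ORC storage, bucketing (required on older Hive versions),
-- and transactional=true.
CREATE TABLE orders_acid (
  order_id    BIGINT,
  customer_id BIGINT,
  order_ts    TIMESTAMP
)
CLUSTERED BY (order_id) INTO 8 BUCKETS
STORED AS ORC
TBLPROPERTIES ('transactional'='true');
```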
A few other SQL differences you may have to consider:
- String case-sensitivity. Teradata is case-insensitive by default, whereas Hive is case-sensitive (see the examples after this list).
- A few things are supported but with different syntax, e.g. the INTEGER datatype becomes INT.
- Similarly, the INSERT syntax changes if the table is partitioned in Hive (also shown below).
- You may want to identify any custom or Teradata built-in UDFs (user-defined functions) you are using that may not be available in Hive.
- SET operators like MINUS (a common rewrite is shown below).
- UNION was not supported in Hive 1.1, but it is supported from 1.2 onwards.
- The "||" concatenation operator has to be replaced with the CONCAT function.
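Here is a sketch of what a few of these conversions look like in practice. All table and column names (customers, sales, sales_stg, old_ids, new_ids) are hypothetical:

```sql
-- Case-insensitive comparison is Teradata's default behaviour;
-- in Hive you have to make it explicit:
SELECT * FROM customers WHERE UPPER(country) = 'INDIA';

-- Partitioned INSERT: Hive needs an explicit PARTITION clause
-- (or dynamic partitioning enabled):
INSERT INTO TABLE sales PARTITION (load_dt = '2020-01-01')
SELECT item_id, amount FROM sales_stg;

-- Teradata: SELECT first_name || ' ' || last_name FROM customers;
-- Hive: use CONCAT instead of ||
SELECT CONCAT(first_name, ' ', last_name) FROM customers;

-- Teradata MINUS: a LEFT JOIN with a NULL check is a common
-- Hive rewrite (add DISTINCT if you need MINUS's deduplication):
SELECT a.id
FROM old_ids a
LEFT JOIN new_ids b ON a.id = b.id
WHERE b.id IS NULL;
```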
Although I have listed a lot of points above that are not supported, or only indirectly supported, I can tell you that even with these challenges most SQL can be converted into Hive-compatible queries.
After SQL syntax and components, I would like to talk about SQL performance as well. As I said before, Hadoop was not built for running typical SQL queries, so you will see an obvious difference in the execution time of SQL queries on Hive compared to Teradata. The difference is bigger for queries like UPDATE/DELETE/MERGE. Another challenge is that Teradata has matured a lot in terms of handling complex queries and responding in seconds. There is a lot of material available on the web for Teradata performance tuning that "actually works". For Hive, however, you will find the same performance tuning points everywhere, and most of them won't help much in real scenarios.

So I recommend not starting with applications that are time-bound, i.e. time-critical jobs. For example, if there is a business application whose ETL pipeline must complete within 1 hour of receiving files from the source application, I would not recommend starting with it. On the other hand, if there is a daily job with a window of 16-20 hours to complete, it is a good candidate to try on Hadoop. Note: 16-20 hours does not mean the job will take that long; it is buffer time in which you can re-run the job multiple times in a day in case of failures.
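For reference, below are the kinds of tuning settings you will find repeated everywhere. They are real Hive session settings, but whether they help is very workload-dependent, and availability and defaults vary by Hive version and distribution:

```sql
SET hive.execution.engine=tez;               -- Tez instead of MapReduce
SET hive.exec.parallel=true;                 -- run independent stages in parallel
SET hive.vectorized.execution.enabled=true;  -- vectorized execution on ORC
SET hive.cbo.enable=true;                    -- cost-based optimizer
SET hive.auto.convert.join=true;             -- map-side joins for small tables
SET hive.exec.dynamic.partition=true;        -- allow dynamic partition inserts
SET hive.exec.dynamic.partition.mode=nonstrict;
```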
Are you planning to move from Teradata to Hadoop? Or do you already have some experience? I would love to hear your comments and experience comparing Teradata & Hadoop Hive.
I have tried to cover the topics at a high level; if you wish to know anything specific, let me know by leaving a comment. Hope this helps.