Numeric overflow in Teradata occurs while handling numeric values during computation or transformation: you get the error when a value exceeds the range of the target numeric data type. The usual fix is to cast the column to a bigger data type; the largest available is DECIMAL(38,0).
Why numeric overflow occurs during computation
Numeric overflow occurs for three common reasons:
- You load a value that is outside the permissible range of the column's data type
- You load a value from a bigger data type column into a smaller data type column
- You aggregate a column and the result exceeds the maximum value of the result data type
Let's dive into the problem so that you know everything about this error: the cause, how to reproduce it, and of course the solution.
In Teradata, the numeric data types are:
- BYTEINT
- SMALLINT
- INTEGER
- BIGINT
- DECIMAL/NUMERIC
- FLOAT/REAL/DOUBLE PRECISION
So whenever you get this error, think about these data types and identify the columns in your query that are defined with one of them. The important point to understand is the storage space and permissible range of values for each type, shown in the table below.
We will continue the discussion with an example. Let's create a table with sample columns of each integer data type and run some queries to reproduce the error and see the solution.
create volatile table chk_num_comp
( col_byteint  byteint
, col_smallint smallint
, col_integer  integer
, col_bigint   bigint
) on commit preserve rows;
DataType | Storage (bytes) | Range
---|---|---
BYTEINT | 1 | -128 to 127
SMALLINT | 2 | -32768 to 32767
INTEGER | 4 | -2147483648 to 2147483647
BIGINT | 8 | -9223372036854775808 to 9223372036854775807
It is very important to understand the permissible range of each numeric data type, especially when fixing a numeric computation error. You do not have to memorize the ranges; you can quickly derive them when required. For example, BYTEINT occupies 1 byte, i.e. 8 bits, which gives 2^8 = 256 distinct values. In Teradata, unlike some other RDBMS, all numeric data types are signed by default and you cannot change that, so the values must be split between negative and non-negative: 2^7 negative values and 2^7 non-negative values. To calculate it with a query, you can try:
select power(2,7)*-1 as min_value, power(2,7)-1 as max_value;

 *** Query completed. One row found. 2 columns returned.

min_value   max_value
---------   ---------
     -128         127
The minimum value is multiplied by -1 to get the negative bound. For the maximum value we subtract 1 because 0 takes one of the non-negative slots. You can calculate the permissible range for the other data types in the same way.
select power(2,15)*-1 as min_value, power(2,15)-1 as max_value;

min_value   max_value
---------   ---------
   -32768       32767

select power(2,31)*-1 as min_value, power(2,31)-1 as max_value;

min_value     max_value
-----------   ----------
-2147483648   2147483647

select power(2,63)*-1 as min_value, power(2,63)-1 as max_value;

min_value              max_value
--------------------   -------------------
-9223372036854776000   9223372036854775999
The BIGINT range shown above is not exact: POWER returns a FLOAT, which cannot represent 2^63 precisely, so the output is rounded. The correct BIGINT range is -9223372036854775808 to 9223372036854775807.
Let's load a row into the table with the maximum value for each of the four data types.
insert into chk_num_comp VALUES (127,32767,2147483647,9223372036854775807);

 *** Insert completed. One row added.

SELECT * FROM chk_num_comp;

col_byteint  col_smallint  col_integer  col_bigint
-----------  ------------  -----------  --------------------
        127         32767   2147483647   9223372036854775807
Whenever you try to load a value outside the permissible range shown in the table above, you will get an error. Let's look at the possible error cases.
Case 1: Load a value beyond the permissible range for the datatype
insert into chk_num_comp VALUES (128,32767,2147483647,9223372036854775807);

 *** Failure 3520 A constant value in a query is not valid for column col_byteint.
In the example above, we tried to insert 128 into the BYTEINT column, which raised the error. If you don't know the range for BYTEINT, this error can be confusing: 128 is a perfectly valid number, just not within the BYTEINT range.
How to fix "constant value is not valid" in Teradata?
If the source value is correct, change the data type of the column to accommodate bigger values.
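A minimal sketch of that fix, reusing the example from Case 1 (the table name chk_num_comp2 is just for illustration): widen BYTEINT to SMALLINT and the same value loads fine.

```sql
-- Sketch: define the column with a larger type (SMALLINT instead of BYTEINT).
create volatile table chk_num_comp2
( col_smallint smallint )
on commit preserve rows;

-- 128 is outside the BYTEINT range (-128 to 127) but well inside
-- the SMALLINT range (-32768 to 32767), so this insert succeeds.
insert into chk_num_comp2 values (128);
```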
Case 2: Load the value from bigger number datatype to smaller number datatype
In this example, we will try to load the BYTEINT column from a BIGINT column and see what happens.
insert into chk_num_comp (col_byteint) SELECT col_bigint FROM chk_num_comp;

 *** Failure 2617 Overflow occurred computing an expression involving chk_num_comp.col_bigint
We got the overflow error. But does this mean we will always get the error when loading a smaller numeric type from a bigger one? No. If the value in the bigger column is within the smaller type's permissible range, the smaller type can store it. Let's see an example in which we delete the rows and load a dummy row.
delete chk_num_comp;
 *** Delete completed. One row removed.

insert into chk_num_comp(1,2,3,4);
 *** Insert completed. One row added.

insert into chk_num_comp (col_byteint) SELECT col_bigint FROM chk_num_comp;
 *** Insert completed. One row added.
You can see the value in the BIGINT column was 4, which is within the permissible range for BYTEINT, so the load succeeded.
How to fix "Overflow occurred computing an expression" in Teradata?
It is fine to store values coming from a bigger data type in a smaller data type as long as the values are within the permissible range; in fact it is a good tuning technique that optimizes storage space. However, if you do expect bigger values, you must change the column data type accordingly.
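If you cannot change the datatype, one defensive option (a sketch, not from the original example) is to load only the rows that fit the smaller type:

```sql
-- Load only values that fit BYTEINT; out-of-range rows are skipped,
-- so the insert cannot raise error 2617.
insert into chk_num_comp (col_byteint)
select cast(col_bigint as byteint)
from   chk_num_comp
where  col_bigint between -128 and 127;
```

Whether silently skipping rows is acceptable depends on your load requirements; often widening the column datatype is the right answer.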
Case 3: Numeric overflow during Aggregation
In this case we will look at common aggregate functions like COUNT, SUM and AVERAGE, and how to handle the error during computation. Let's delete and load dummy rows into the table for this example.
delete chk_num_comp;
 *** Delete completed. One row removed.

insert into chk_num_comp(1,1,1,1);
 *** Insert completed. One row added.

insert into chk_num_comp VALUES (127,32767,2147483647,9223372036854775807);
 *** Insert completed. One row added.
Let's start with SUM. For the BYTEINT column we have two rows with values 127 and 1. Their sum is 128, which is more than the permissible BYTEINT range. So will it throw the error? Let's see:
SELECT sum(col_byteint) AS sum_col_byteint FROM chk_num_comp;

sum_col_byteint
---------------
            128
No error. Why? I will explain, but first let's run SUM on the SMALLINT and INTEGER columns.
SELECT sum(col_smallint) AS sum_col_smallint FROM chk_num_comp;

sum_col_smallint
----------------
           32768

SELECT sum(col_integer) AS sum_col_integer FROM chk_num_comp;

 *** Failure 2616 Numeric overflow occurred during computation.
For the SMALLINT column we got the output, but for the INTEGER column we got the numeric overflow error. Why? The answer is the return type of the SUM function, which for BYTEINT, SMALLINT and INTEGER inputs is INTEGER.
So when SUM added the two rows, 2147483647 + 1, the result exceeded the permissible INTEGER range, hence the numeric overflow error. What about BIGINT? For BIGINT, the return type of SUM is BIGINT, and if the calculation surpasses the BIGINT range, the same error occurs:
SELECT sum(col_bigint) AS sum_col_bigint FROM chk_num_comp;

 *** Failure 2616 Numeric overflow occurred during computation.
To confirm the return type of the SUM function, we can run the queries below.
delete chk_num_comp;
insert into chk_num_comp(1,1,1,1);

SELECT sum(col_byteint) AS sum_col_byteint, TYPE(sum_col_byteint) FROM chk_num_comp;

sum_col_byteint  Type(sum_col_byteint)
---------------  ---------------------
              1  INTEGER

SELECT sum(col_smallint) AS sum_col_smallint, TYPE(sum_col_smallint) FROM chk_num_comp;

sum_col_smallint  Type(sum_col_smallint)
----------------  ----------------------
               1  INTEGER

SELECT sum(col_integer) AS sum_col_integer, TYPE(sum_col_integer) FROM chk_num_comp;

sum_col_integer  Type(sum_col_integer)
---------------  ---------------------
              1  INTEGER

SELECT sum(col_bigint) AS sum_col_bigint, TYPE(sum_col_bigint) FROM chk_num_comp;

sum_col_bigint  Type(sum_col_bigint)
--------------  --------------------
             1  BIGINT
How to fix "Numeric overflow occurred during computation" in Teradata?
You can explicitly cast the column to a bigger data type inside the SUM function to avoid the numeric overflow during computation. Let's see the example below for the BIGINT column.
delete chk_num_comp;
insert into chk_num_comp VALUES (1,1,1,1);
insert into chk_num_comp VALUES (127,32767,2147483647,9223372036854775807);

select col_bigint from chk_num_comp;

col_bigint
--------------------
                   1
 9223372036854775807

SELECT sum(CAST(col_bigint AS decimal(38,0))) AS sum_col_bigint, TYPE(sum_col_bigint) FROM chk_num_comp;

sum_col_bigint        Type(sum_col_bigint)
--------------------  --------------------
9223372036854775808.  DECIMAL(38,0)

SELECT sum(CAST(col_bigint AS decimal(38,0) FORMAT 'Z(38)9')) AS sum_col_bigint, TYPE(sum_col_bigint) FROM chk_num_comp;

sum_col_bigint        Type(sum_col_bigint)
--------------------  --------------------
9223372036854775808   DECIMAL(38,0)
In the last query, I applied a FORMAT to suppress the trailing decimal point "." that was appearing in the output. In this manner you can fix numeric overflow errors in the SUM and AVERAGE aggregate functions.
For COUNT, cast the COUNT function itself rather than the column. See the example below:
--wrong way: the output is still INTEGER type.
SELECT count(CAST(col_integer AS bigint)) AS count_col_integer, TYPE(count_col_integer) FROM chk_num_comp;

count_col_integer  Type(count_col_integer)
-----------------  -----------------------
                2  INTEGER

--correct way: the output is now BIGINT type.
SELECT CAST(count(col_integer) AS bigint) AS count_col_integer, TYPE(count_col_integer) FROM chk_num_comp;

count_col_integer  Type(count_col_integer)
-----------------  -----------------------
                2  BIGINT
The conversion of the varchar value overflowed an int column
If you store numeric values in a VARCHAR column, be very careful when converting them to a numeric data type like INTEGER. If a value is outside the range allowed for the numeric type, the conversion results in an overflow error.
A very common example is the mobile number. In many tables I have seen mobile numbers stored in a VARCHAR column. If such a value is converted to INTEGER, it may throw an overflow error: the INTEGER maximum is 2147483647, and mobile numbers are generally 10-digit numbers. So be careful when converting VARCHAR values into numeric columns.
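A sketch of the safe conversion (the customer table and mobile_no column are hypothetical): cast to BIGINT, which comfortably holds 10-digit values.

```sql
-- mobile_no is a VARCHAR holding 10-digit numbers (hypothetical table).
-- INTEGER tops out at 2147483647, so cast to BIGINT instead;
-- TRIM guards against stray spaces in the varchar value.
select cast(trim(mobile_no) as bigint) as mobile_num
from   customer;
```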
That’s it guys. This is all I planned to cover in this post. Now you know the reason behind the numeric overflow error and the solution to it as well. Let me know if you see any more case in the comment box below and I will be happy to cover it too.
Teradata Interval field overflow error & the solution
INTERVAL is the data type in Teradata that represents a duration between two points in time. We can add an INTERVAL to a DATE/TIMESTAMP value to get a new DATE/TIMESTAMP. Also, whenever we subtract two timestamps in Teradata, the output is of INTERVAL type. Just like any other data type, INTERVAL has a permissible range, and if a value exceeds it we get the "Interval Field Overflow" error.
Now that you know the reason for the Interval Field Overflow error, let us see a few examples that explain it.
Create a dummy table with some data:
create volatile table dummy_ts
( start_ts timestamp
, end_ts   timestamp
) on commit preserve rows;

insert into dummy_ts select '2016-08-10 22:12:11','2016-08-12 20:17:21';
insert into dummy_ts select '2015-01-14 20:17:21','2017-05-14 12:16:54';
insert into dummy_ts select '2016-08-17 20:17:21','2019-01-16 18:17:21';
insert into dummy_ts select '2015-01-22 20:17:21','2015-07-22 22:45:11';

select * from dummy_ts;

start_ts                    end_ts
--------------------------  --------------------------
2015-01-14 20:17:21.000000  2017-05-14 12:16:54.000000
2016-08-17 20:17:21.000000  2019-01-16 18:17:21.000000
2016-08-10 22:12:11.000000  2016-08-12 20:17:21.000000
2015-01-22 20:17:21.000000  2015-07-22 22:45:11.000000
Whenever we subtract two timestamp columns, we must specify the INTERVAL type of the output, else we get the "Invalid operation for DateTime or Interval" error.
select start_ts,end_ts, end_ts - start_ts from dummy_ts;
Error: “Invalid operation for DateTime or Interval”
Let's specify the DAY difference between the two timestamp values:
select start_ts,end_ts, end_ts - start_ts DAY as Day_Diff from dummy_ts;
Error: “Interval Field Overflow”
Since a bare DAY defaults to a small leading-field precision (DAY(2), i.e. at most 99 days), we have to specify the precision we expect in the result. Specifying DAY(4) gives the day difference with up to 4 digits, so 9999 days is the maximum difference an INTERVAL can represent; only in very exceptional cases would you need more than that.
So now let's subtract the two timestamp columns and get the difference:
select start_ts, end_ts, end_ts - start_ts DAY(4) as Day_Diff, type(Day_Diff) from dummy_ts;

start_ts             end_ts               Day_Diff  Type(Day_Diff)
-------------------  -------------------  --------  ---------------
2015-01-14 20:17:21  2017-05-14 12:16:54       851  INTERVAL DAY(4)
2016-08-17 20:17:21  2019-01-16 18:17:21       882  INTERVAL DAY(4)
2016-08-10 22:12:11  2016-08-12 20:17:21         2  INTERVAL DAY(4)
2015-01-22 20:17:21  2015-07-22 22:45:11       181  INTERVAL DAY(4)
To get output from DAY to HOUR
select start_ts, end_ts, end_ts - start_ts DAY(4) to HOUR as Day_Diff, type(Day_Diff) from dummy_ts;

start_ts             end_ts               Day_Diff  Type(Day_Diff)
-------------------  -------------------  --------  -----------------------
2015-01-14 20:17:21  2017-05-14 12:16:54   850 16   INTERVAL DAY(4) TO HOUR
2016-08-17 20:17:21  2019-01-16 18:17:21   881 22   INTERVAL DAY(4) TO HOUR
2016-08-10 22:12:11  2016-08-12 20:17:21     1 22   INTERVAL DAY(4) TO HOUR
2015-01-22 20:17:21  2015-07-22 22:45:11   181 02   INTERVAL DAY(4) TO HOUR
To get output from DAY to MINUTE
select start_ts, end_ts, end_ts - start_ts DAY(4) to MINUTE as Day_Diff, type(Day_Diff) from dummy_ts;

2015-01-14 20:17:21  2017-05-14 12:16:54  850 15:59  INTERVAL DAY(4) TO MINUTE
2016-08-17 20:17:21  2019-01-16 18:17:21  881 22:00  INTERVAL DAY(4) TO MINUTE
2016-08-10 22:12:11  2016-08-12 20:17:21    1 22:05  INTERVAL DAY(4) TO MINUTE
2015-01-22 20:17:21  2015-07-22 22:45:11  181 02:28  INTERVAL DAY(4) TO MINUTE
To get output from DAY to SECOND
select start_ts, end_ts, end_ts - start_ts DAY(4) to SECOND as Day_Diff, type(Day_Diff) from dummy_ts;

2015-01-14 20:17:21  2017-05-14 12:16:54  850 15:59:33.000000  INTERVAL DAY(4) TO SECOND(6)
2016-08-17 20:17:21  2019-01-16 18:17:21  881 22:00:00.000000  INTERVAL DAY(4) TO SECOND(6)
2016-08-10 22:12:11  2016-08-12 20:17:21    1 22:05:10.000000  INTERVAL DAY(4) TO SECOND(6)
2015-01-22 20:17:21  2015-07-22 22:45:11  181 02:27:50.000000  INTERVAL DAY(4) TO SECOND(6)
An important point to note here is the fractional-seconds component of the interval. It defaults to SECOND(6) because the table columns are defined as TIMESTAMP(6). If you don't want fractional seconds, you have to try an alternative: you cannot simply truncate seconds in the INTERVAL data type.
select start_ts,end_ts, end_ts - start_ts DAY(4) to SECOND(0) as Day_Diff, type(Day_Diff) from dummy_ts;
Error: “Interval Field Overflow”
You can explicitly convert TIMESTAMP(6) to TIMESTAMP(0) and then derive the interval to get the desired output. To get rid of the fractional-seconds portion, you can try something like:
select start_ts, end_ts,
       cast(cast(end_ts as char(19)) as timestamp(0))
     - cast(cast(start_ts as char(19)) as timestamp(0)) DAY(4) to SECOND(0) as Day_Diff,
       type(Day_Diff)
from dummy_ts;

2015-01-14 20:17:21  2017-05-14 12:16:54  850 15:59:33  INTERVAL DAY(4) TO SECOND(0)
2016-08-17 20:17:21  2019-01-16 18:17:21  881 22:00:00  INTERVAL DAY(4) TO SECOND(0)
2016-08-10 22:12:11  2016-08-12 20:17:21    1 22:05:10  INTERVAL DAY(4) TO SECOND(0)
2015-01-22 20:17:21  2015-07-22 22:45:11  181 02:27:50  INTERVAL DAY(4) TO SECOND(0)
Also, before applying any other function to an INTERVAL value, you may want to convert it to a compatible data type first. Most functions are not applicable to INTERVAL, so you may get the INVALID OPERATION error. For example, I tried to apply the TRIM function to the INTERVAL output:
select start_ts, end_ts,
       trim(cast(cast(end_ts as char(19)) as timestamp(0))
          - cast(cast(start_ts as char(19)) as timestamp(0)) DAY(4) to SECOND(0)) as Day_Diff,
       type(Day_Diff)
from dummy_ts;
Error: “Invalid operation for DateTime or Interval”
I can convert the INTERVAL to VARCHAR and then apply string functions to it, like below:
select start_ts, end_ts,
       trim(cast((end_ts - start_ts DAY(4) to SECOND(6)) as varchar(30))) as Day_Diff,
       type(Day_Diff)
from dummy_ts;

start_ts                    end_ts                      Day_Diff             Type(Day_Diff)
--------------------------  --------------------------  -------------------  --------------
2015-01-14 20:17:21.000000  2017-05-14 12:16:54.000000  850 15:59:33.000000  VARCHAR(30)
2016-08-17 20:17:21.000000  2019-01-16 18:17:21.000000  881 22:00:00.000000  VARCHAR(30)
2016-08-10 22:12:11.000000  2016-08-12 20:17:21.000000  1 22:05:10.000000    VARCHAR(30)
2015-01-22 20:17:21.000000  2015-07-22 22:45:11.000000  181 02:27:50.000000  VARCHAR(30)
So you can see in the example above that once the INTERVAL is converted to VARCHAR, we can apply string functions like TRIM or SUBSTRING on it.
Now you know why we get the "Interval Field Overflow" and "Invalid operation for DateTime or Interval" errors.
Hope this helps.
Convert number into Time in Teradata
While calculating measures you may want to convert seconds to TIME, i.e. find out how many hours, minutes and seconds a number represents. For example, you may want to convert 5000 seconds to hh:mi:ss format. This can be done easily using INTERVALs. Let's quickly see how.
select 5000 * CAST(INTERVAL '1' SECOND AS INTERVAL DAY TO SECOND(0));

OUTPUT: 0 01:23:20
This means 5000 seconds means 1 hour 23 minutes and 20 seconds.
Another way of writing the same command:
select 5000 * INTERVAL '00 00:00:01' DAY TO SECOND;

OUTPUT: 0 01:23:20
Let's increase the number of seconds:
select 500000 * CAST(INTERVAL '1' SECOND AS INTERVAL DAY TO SECOND(0));

OUTPUT: 5 18:53:20
This means 500000 seconds constitutes 5 Days 18 Hours 53 Minutes and 20 Seconds.
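You can cross-check that arithmetic with plain integer math (a quick sanity check, not part of the original example):

```sql
-- 5 days 18:53:20 back to seconds: 5*86400 + 18*3600 + 53*60 + 20 = 500000
select 5*86400 + 18*3600 + 53*60 + 20 as total_seconds;
```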
You can change the INTERVAL range to get the output in the desired format. Say I just want the output in hh:mi:ss format; then we can run:
select 500000 * CAST(INTERVAL '1' SECOND AS INTERVAL HOUR TO SECOND(0));

OUTPUT: 138:53:20
This means 500000 seconds constitutes 138 Hours 53 Minutes and 20 Seconds.
There can be another use-case where you already have "TIME" data stored as a number, like 65320, i.e. the 6th hour, 53rd minute and 20th second, and you just want to change this number to a proper time format.
If you just want to change the appearance of the number, try the command below:
select 65320 (format '99:99:99');

OUTPUT: 06:53:20
But the return data type is still INTEGER. If you want an actual TIME value, cast it to TIME(0):
select cast(trim(65320 (format '99:99:99')) as TIME(0));

OUTPUT: 06:53:20