Lesson 2.13

Decimal data type in C# - programming examples


Objective

What is the decimal data type in C#?
How to declare and initialize it
Programming examples
Conversion and Casting

What is the Decimal Data Type in C#?

"Decimal is 128-bit data type which is derived from System.Decimal Class. It stores value between (-7.9 x 1028 to 7.9 x 1028) / (100 to 28). This data type is more appropriate for calculating financial and monetary value."

Type      Approximate Range                                   Precision       .NET Framework type
decimal   (-7.9 x 10^28 to 7.9 x 10^28) / (10^0 to 10^28)     28-29 digits    System.Decimal
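Why decimal rather than double for money? A binary floating-point type cannot represent a value like 0.1 exactly, while decimal can. Below is a minimal sketch of the difference (the class name is only illustrative):

using System;

class PrecisionDemo
{
    static void Main()
    {
        double doubleTotal = 0;
        decimal decimalTotal = 0M;

        // Add 0.1 ten times with each type.
        for (int i = 0; i < 10; i++)
        {
            doubleTotal += 0.1;
            decimalTotal += 0.1M;
        }

        Console.WriteLine(doubleTotal == 1.0);   // False - binary rounding error accumulates
        Console.WriteLine(decimalTotal == 1.0M); // True  - decimal stores 0.1 exactly
    }
}

This is why decimal is preferred whenever exact decimal fractions matter, such as prices and account balances.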

Declaration and Initialization

You can declare and initialize the decimal data type as follows:

		  
 decimal num1 = 35;      // correct (√) - an int value converts implicitly to decimal
 decimal num1 = 35.23;   // incorrect (×) - compiler error: 35.23 is a double literal
 decimal num1 = 35.23M;  // correct (√) - the m or M suffix makes the literal a decimal
		  

For a value with a fractional part, it is mandatory to add the m or M suffix at the end; otherwise the literal is treated as a double and the compiler raises the following error:

Compiler Error:  Literal of type double cannot be implicitly converted to type 'decimal'; use an 'M' suffix to create a literal of this type
		  

Programming Example

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace decimal_data_type
{
    class Program
    {
        static void Main(string[] args)
        {
            decimal d1, d2, result;
            d1 = 3898.7001M;   // decimal literals require the M suffix
            d2 = 1273.663M;
            result = d1 / d2;  // decimal division keeps up to 28-29 significant digits
            Console.WriteLine(result);
            Console.ReadLine();
        }
    }
}
		  

Output

3.0610138631647460906063848915
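In financial code the full 28-29 digit quotient is usually rounded to a fixed number of decimal places before it is shown. A small sketch, assuming two decimal places are wanted (Math.Round has an overload that takes a decimal and the number of places):

using System;

class RoundingDemo
{
    static void Main()
    {
        decimal result = 3898.7001M / 1273.663M;

        // Round the full-precision quotient to two decimal places for display.
        decimal rounded = Math.Round(result, 2);

        Console.WriteLine(rounded);                // 3.06
        Console.WriteLine(rounded.ToString("F2")); // 3.06
    }
}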

Conversion

1. Implicit Conversion - No cast is needed when the left-hand side data type is decimal. The integral types (byte, sbyte, short, ushort, int, uint, long, ulong, and char) convert to decimal implicitly.

  Example:

  decimal a = 5, b = 6;
  decimal result = a + b;

2. Explicit Conversion - A cast is required when the left-hand side data type is byte, sbyte, short, ushort, int, uint, long, ulong, char, float, or double (see the sketch after this list).

  Example:

  decimal a = 5, b = 6;
  int result = (int)(a + b);

  Note: When a decimal value is converted to an integral type, the fractional part is truncated, so the precision after the decimal point is lost.
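The two rules can be combined in one small runnable sketch (the class and variable names are only illustrative): an int converts to decimal implicitly, while converting back to int requires an explicit cast that truncates the fraction.

using System;

class ConversionDemo
{
    static void Main()
    {
        int units = 7;
        decimal price = 19.99M;

        // Implicit conversion: the int operand is promoted to decimal automatically.
        decimal total = units * price;
        Console.WriteLine(total);      // 139.93

        // Explicit conversion: decimal to int needs a cast, and the fraction is truncated.
        int wholePart = (int)total;
        Console.WriteLine(wholePart);  // 139
    }
}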

Passing a Decimal Value as a Parameter

When passing a decimal value to a method whose parameter has a different data type, you must cast the value to that type to avoid a compilation error.

Programming Example

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace decimal_data_type
{
    class Program
    {
        static void Main(string[] args)
        {
            decimal var1 = 342.52M;
            ShowNumber((int)(var1)); // Correct (√) - the decimal is cast to int

            // ShowNumber(var1);     // Incorrect (×) - compiler error: cannot convert decimal to int
        }

        public static void ShowNumber(int num1)
        {
            Console.WriteLine(num1);
            Console.ReadLine();
        }
    }
}
		  

Output

342

It is a good habit to cast or convert data to the expected data type explicitly. It will save you from unnecessary errors and make your code more robust.
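Note that casting and converting are not equivalent for decimal: a cast truncates the fractional part, while Convert.ToInt32 rounds to the nearest integer (ties round to the nearest even number). A short sketch of the difference (the class name is illustrative):

using System;

class CastVsConvertDemo
{
    static void Main()
    {
        decimal var1 = 342.52M;

        Console.WriteLine((int)var1);             // 342 - the cast truncates the fraction
        Console.WriteLine(Convert.ToInt32(var1)); // 343 - Convert rounds to the nearest integer
    }
}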

Explanation

In the above example we created a method called ShowNumber(int num1), which takes an integer value as a parameter and then displays the number.

We called this method from Main, casting the decimal value to int before passing it:
 decimal var1 = 342.52M;
 ShowNumber((int)(var1)); // Correct (√)

 If we do not cast, the call raises compilation errors as follows:
 ShowNumber(var1); // Incorrect (×)

 Compilation Error:
 Error 1: The best overloaded method match for 'decimal_data_type.Program.ShowNumber(int)' has some invalid arguments
 Error 2: cannot convert from 'decimal' to 'int'
  

Summary

Decimal is a 128-bit data type that stores values in the range (-7.9 x 10^28 to 7.9 x 10^28) / (10^0 to 10^28). It is mostly used for financial or monetary calculations. In the next chapter we will discuss the char data type.
