Floating point values are stored in a format defined by the IEEE 754 floating point standard.

This format uses the excess-127 representation for 32-bit numbers and the excess-1023 representation for 64-bit numbers.

The typical format of a 32-bit floating point number is shown below.

**Sign Bit** + **Exponent** + **Mantissa**

The sign occupies one bit, the exponent 8 bits, and the mantissa 23 bits.
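As a sketch of how these fields sit in memory, the bits of a 32-bit float can be pulled apart with a few shifts and masks (Python's struct module is used here purely for illustration):

```python
import struct

def float32_fields(x):
    # Reinterpret the float's 4 bytes as an unsigned 32-bit integer.
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign     = bits >> 31            # 1 bit (the MSB)
    exponent = (bits >> 23) & 0xFF   # next 8 bits
    mantissa = bits & 0x7FFFFF       # low 23 bits
    return sign, exponent, mantissa

sign, exponent, mantissa = float32_fields(2.3)
print(sign, format(exponent, '08b'), format(mantissa, '023b'))
```

Running this for 2.3 prints the three fields worked out step by step below.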

Now let us see how the floating point value 2.3 is represented.

1) From the value it is clear that it is a positive number. According to this standard, if the value is positive then 0 is stored in the sign bit; otherwise 1 is stored.

2) The integer 2 is represented as 10 in binary. And if we convert the fractional part [0.3] into binary, we get 01001100110011001...
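The fractional conversion in step 2 is just repeated doubling: each time the fraction is doubled, the digit that crosses the binary point is the next bit. A minimal sketch:

```python
def frac_to_binary(frac, bits=20):
    # Each doubling pushes the next binary digit of the
    # fraction across the binary point.
    digits = []
    for _ in range(bits):
        frac *= 2
        if frac >= 1:
            digits.append('1')
            frac -= 1
        else:
            digits.append('0')
    return ''.join(digits)

print(frac_to_binary(0.3))   # 01001100110011001100
```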

3) So, 2.3 is represented in binary as 10.0100110011001100... It is clear that the pattern 1001 repeats. Converting this to normalized form, we get 1.00100110011... × 2^1.

4) According to the standard, the exponent value is biased by a fixed value: 127 is added to the exponent for 32-bit numbers and 1023 for 64-bit numbers. In our example, adding 127 to the exponent 1 gives 128, which is 10000000 in binary.
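Step 4 is a single addition, which can be checked in one line (the bias value 127 comes from the standard):

```python
exponent = 1    # from the normalized form 1.00100110011... x 2^1
bias = 127      # single-precision bias defined by IEEE 754
print(format(exponent + bias, '08b'))   # 10000000
```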

5) So, 2.3 will be stored as shown below.

0 10000000 00100110 01100110 0110011

If you observe the above format,

a) 0 is stored in sign bit [MSB],

b) 10000000 [exponent 1 + bias 127 = 128] is stored in the 8-bit exponent field,

c) Mantissa [00100110 01100110 0110011] is stored in 23 bits. Observe that the repeating pattern continues until the 23rd bit is reached, after which it is cut off.
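As a sanity check, the three stored fields can be recombined into a decimal value using (-1)^sign × (1 + mantissa/2^23) × 2^(exponent − 127). A sketch with the field values derived above:

```python
# Field values taken from the bit pattern worked out above.
sign     = 0
exponent = 0b10000000                  # 128, the biased exponent
mantissa = 0b00100110011001100110011   # the 23 stored bits

value = (-1) ** sign * (1 + mantissa / 2 ** 23) * 2 ** (exponent - 127)
print(value)   # ~2.2999999523, the nearest 32-bit float to 2.3
```

Note that the result is not exactly 2.3: cutting the repeating pattern off at 23 bits introduces a tiny rounding error, which is why 2.3 cannot be stored exactly in this format.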

Note: the procedure for a 64-bit floating point number is similar, but with an 11-bit exponent field (bias 1023) and a 52-bit mantissa.
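For completeness, a sketch of the same decomposition for a 64-bit double (1 sign bit, 11 exponent bits biased by 1023, 52 mantissa bits), again using Python's struct module for illustration:

```python
import struct

# Reinterpret the double's 8 bytes as an unsigned 64-bit integer.
bits = struct.unpack('>Q', struct.pack('>d', 2.3))[0]
sign     = bits >> 63
exponent = (bits >> 52) & 0x7FF      # 11 bits, biased by 1023
mantissa = bits & ((1 << 52) - 1)    # low 52 bits

print(sign, exponent - 1023, format(mantissa, '052b'))
```

The sign and unbiased exponent come out the same as in the 32-bit case (0 and 1); only the mantissa carries more of the repeating pattern.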

Let me know if this answers your question.