The IEEE 754 specification defines many floating-point types, including binary16, binary32, binary64 and binary128. Most developers are familiar with binary32 (equivalent to float in C#) and binary64 (equivalent to double in C#). They provide a standard format to represent a wide range of values with a precision acceptable for many applications. .NET has always had float and double, and with .NET 5 Preview 7 we’ve added a new Half type (equivalent to binary16)!
A Half is a binary floating-point number that occupies 16 bits. With half as many bits as a float, a Half can represent values in the range ±65504. More formally, the Half type is defined as a base-2 16-bit interchange format meant to support the exchange of floating-point data between implementations. One of the primary use cases of the Half type is to save on storage space where the computed result does not need to be stored with full precision. Many computation workloads already take advantage of the Half type: machine learning, graphics cards, the latest processors, native SIMD libraries, etc. With the new Half type, we expect to unlock many applications in these workloads.
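A quick way to see those limits is to print the constants the type exposes (a minimal sketch, assuming .NET 5 or later, where System.Half provides MinValue, MaxValue and Epsilon):

using System;

class HalfRange
{
    static void Main()
    {
        Console.WriteLine(Half.MaxValue);   // largest finite Half, 65504
        Console.WriteLine(Half.MinValue);   // smallest finite Half, -65504
        Console.WriteLine(Half.Epsilon);    // smallest positive Half, about 5.96E-08
        Console.WriteLine((Half)1.0f);      // explicit cast from float to Half
    }
}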
Let’s explore the Half type:
The 16 bits in the Half type are split into:
- Sign bit: 1 bit
- Exponent bits: 5 bits
- Significand bits: 10 bits (with 1 implicit bit that is not stored)
Despite the fact that the significand is made up of 10 bits, the total precision is really 11 bits. The format is assumed to have an implicit leading bit of value 1 (unless the exponent field is all zeros, in which case the leading bit has a value of 0). To represent the number 1 in the Half format, we’d use these bits:

0 01111 0000000000 = 1
The leading bit (our sign bit) is 0, indicating a positive number. The exponent bits are 01111, or 15 in decimal. However, the exponent bits don’t represent the exponent directly. Instead, an exponent bias is defined that lets the format represent both positive and negative exponents. For the Half type, that exponent bias is 15. The true exponent is derived by subtracting 15 from the stored exponent. Therefore, 01111 represents the exponent e = 01111 (in binary) - 15 (the exponent bias) = 0. The significand is 0000000000, which is interpreted as the binary fraction 0.significand, or 0 in our case. If, for example, the significand was 0000011010 (26 in decimal), we divide its decimal value 26 by the number of values representable in 10 bits (1 << 10): the significand 0000011010 (in binary) is 26 / (1 << 10) = 26 / 1024 = 0.025390625 in decimal. Finally, because our stored exponent bits (01111) are not all 0, we have an implicit leading bit of 1. Therefore,

0 01111 0000000000 = 2^0 * (1 + 0/1024) = 1
Half values are interpreted as -1^(sign bit) * 2^(storedExponent - 15) * (implicitBit + (significand/1024)). A special case exists for the stored exponent 00000. In this case, the bits are interpreted as -1^(sign bit) * 2^(-14) * (0 + (significand/1024)). Let’s look at the bit representations of some other numbers in the Half format:
Smallest positive non-zero value:
0 00000 0000000001 = -1^(0) * 2^(-14) * (0 + 1/1024) ≈ 0.000000059604645
(Note that the implicit bit is 0 here because the stored exponent bits are all 0.)

Largest normal number:
0 11110 1111111111 = -1^(0) * 2^(15) * (1 + 1023/1024) = 65504

Negative infinity:
1 11111 0000000000 = -Infinity

Negative and positive zero:
1 00000 0000000000 = -0
0 00000 0000000000 = +0
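To tie the formulas and the bit patterns together, here is an illustrative decoder (the Decode helper is ours, not part of the framework; System.Half performs this interpretation for you):

using System;

class HalfBitsDemo
{
    // Decodes a raw binary16 bit pattern using the formulas above.
    static double Decode(ushort bits)
    {
        int sign        = (bits >> 15) & 0x1;    // 1 sign bit
        int storedExp   = (bits >> 10) & 0x1F;   // 5 exponent bits
        int significand = bits & 0x3FF;          // 10 significand bits

        if (storedExp == 0x1F)                   // all ones: infinity or NaN
            return significand == 0
                ? (sign == 0 ? double.PositiveInfinity : double.NegativeInfinity)
                : double.NaN;

        int    implicitBit = storedExp == 0 ? 0 : 1;          // no implicit 1 for subnormals
        int    exponent    = storedExp == 0 ? -14 : storedExp - 15;
        double magnitude   = Math.Pow(2, exponent) * (implicitBit + significand / 1024.0);
        return sign == 0 ? magnitude : -magnitude;
    }

    static void Main()
    {
        Console.WriteLine(Decode(0b0_01111_0000000000)); // 1
        Console.WriteLine(Decode(0b0_00000_0000000001)); // ~0.000000059604645
        Console.WriteLine(Decode(0b0_11110_1111111111)); // 65504
        Console.WriteLine(Decode(0b1_11111_0000000000)); // -Infinity
    }
}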
Conversions to/from float/double

Half can be converted to/from a float/double by simply casting it:

float f = (float)half;
Half h = (Half)floatValue;
Any Half value, because Half uses only 16 bits, can be represented as a float/double without loss of precision. However, the inverse is not true. Some precision may be lost when going from float/double to Half. In .NET 5.0, the Half type is primarily an interchange type with no arithmetic operators defined on it. It only supports parsing, formatting and comparison operators. All arithmetic operations will need an explicit conversion to a float/double. Future versions will consider adding arithmetic operators directly on Half.
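For example (the specific constants are only illustrative; 3.140625 happens to be exactly representable in binary16, while 3.14159265f is not):

using System;

class HalfConversionDemo
{
    static void Main()
    {
        // Widening: every Half is exactly representable as a float or double.
        Half  h = (Half)3.140625f;    // exactly representable in binary16
        float f = (float)h;           // no precision lost going up to float

        // Narrowing: a float may lose precision when converted to Half.
        float pi        = 3.14159265f;
        Half  roundedPi = (Half)pi;   // nearest Half is 3.140625

        // .NET 5 defines no arithmetic operators on Half, so compute in
        // float (or double) and convert the result back for storage.
        Half sum = (Half)((float)h + (float)roundedPi);

        Console.WriteLine(f);
        Console.WriteLine(roundedPi);
        Console.WriteLine(sum);
    }
}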
As library authors, one of the points to consider is that a language can add support for a type in the future. It is conceivable that C# adds a half type in the future. Language support would enable an identifier such as f16 (similar to the f suffix that exists today) and implicit/explicit conversions. Thus, the library-defined type Half needs to be defined in a manner that does not result in any breaking changes if half becomes a reality. Specifically, we needed to be careful about adding operators to the Half type. Implicit conversions to float/double could lead to potential breaking changes if language support is added. On the other hand, having a Float/Double property on the Half type felt less than ideal. In the end, we decided to add explicit operators to convert to/from float/double. If C# does add support for half, no user code would break, since all casts would be explicit.
Adoption

Half will find its way into many codebases. The Half type plugs a gap in the .NET ecosystem and we expect many numerics libraries to take advantage of it. In the open source arena, ML.NET is expected to start using Half, the Apache Arrow project’s C# implementation has an open issue for it, and the DataFrame library tracks a related issue here. As more intrinsics are unlocked in .NET for x86 and ARM processors, we expect that computation with Half can be accelerated, resulting in more efficient code!
That’s pretty cool. Not doing embedded systems programming myself, I don’t really see any practical use for it, but if other folks can find ways to use it, that’s great! One question though: why is it Half instead of half, as all the other numeric primitives are?
Why not just implement half in the CLR as opposed to the framework? I’m guessing time constraints, but then how does Half work without CLR support?
What is the main efficiency advantage of having a 16-bit floating point data type? Can you give an example of where it could improve the efficiency of code?
There are many scenarios in training ML models where the 16-bit float is sufficiently accurate. The main advantage going from 32-bit to 16-bit is the reduced memory requirements. Models in ML can get very large and have many coefficients. If the coefficients go from 32-bit to 16-bit, they will take up half the space. This means more values on each cache line, more values in registers, less need to go to main memory. Smaller programs...
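A small sketch to make the storage point concrete (the weight arrays are hypothetical; Unsafe.SizeOf is used only to show the per-element size):

using System;
using System.Runtime.CompilerServices;

class HalfStorageDemo
{
    static void Main()
    {
        const int count = 1_000_000;

        // Hypothetical model coefficients stored at reduced precision.
        Half[]  halfWeights  = new Half[count];
        float[] floatWeights = new float[count];

        Console.WriteLine(Unsafe.SizeOf<Half>());                        // 2 bytes per element
        Console.WriteLine(sizeof(float));                                // 4 bytes per element
        Console.WriteLine(halfWeights.Length * Unsafe.SizeOf<Half>());   // ~2 MB of element data
        Console.WriteLine(floatWeights.Length * sizeof(float));          // ~4 MB of element data
    }
}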
This is good, except for the naming: Wouldn't it be clearer to just call these types Float16, Float32, Float64 ?
Also, why stop here? There's an obvious need for a Float128.
And on the other side, a Float8 type is still heavily used in telephony: Despite a very different vocabulary used in their specs, the PCM codes used in digital telephony are just 8-bit floating point numbers, with 1 sign bit, 3 exponent bits, and 4...
Float16, Float32, and Float64 might have been clearer names, but Single and Double are the names chosen 20 years ago, and consistency generally outweighs the benefit such a change might bring.
I responded to the Float128 idea here: https://devblogs.microsoft.com/dotnet/introducing-the-half-type/#comment-7104
Namely there is not a lot of hardware acceleration for it today and the people who want Float128 may also be the same group that wants "arbitrary precision floating-point", which may be better solved by a different...
Could we add an alias for float => whole 🙂
🙂
Will the half type be available in Visual Basic .NET? If not, I suggest adding it.
The Half type is only supported by the framework today, so it should be as usable as any other framework type.
Any language support would be up to the respective language team and can be requested at https://github.com/dotnet/vblang/issues/new
This is unexpected and awesome!
What happens when the CPU doesn’t support halves in hardware?
Is software emulation used entirely, or will they get processed as floats and stored back into 16-bit halves for some level of hardware acceleration?
There is no hardware acceleration in .NET 5 and that is instead something we will be looking at for a future version of .NET.
Provided the hardware can accelerate a given operation and the accelerated version outperforms the equivalent software fallback, then the hardware option will likely be used.
Nice work, but having half alongside Half would be a much better story, otherwise we might never get half as priorities come and go.
It’s great, that Half is being added. But recently bfloat16 (https://en.wikipedia.org/wiki/Bfloat16_floating-point_format) also became popular for neural network workloads.
In general, it would be great to have support for generic math in .NET.
bfloat16 is also on our radar (although we don’t currently have a tracking issue). It may be added in the future, provided the appropriate demand and use cases for it can be demonstrated.
Nice work. I have a lot of questions ;-)
1. Will the Half be a primitive type (also visible in TypeCode.Half)?
2. What will be the CLR type (Single, Double, ...)?
3. Will existing classes be updated (like Vector and Intrinsics)?
4. Will there be a Math for it?
5. Is the type blittable to a CUDA half-float?
"We expect that Half will find its way into many codebases"
The biggest problem still is that there...
I do empathize: the lack of generic math support is quite a pain. The upcoming C# Shapes feature will hopefully take care of that soon.
We used to have a generic math lib that just got unwieldy and was ultimately too limited to apply generally. We dropped it and just started using dynamic in most cases instead and that has worked out a lot better. Yeah, it can be a bit slower, but dynamic is...
@Mike: Unfortunately, performance is of high relevance for us. We have large datasets that need to be processed, so we are trying to get every piece of performance we can. Most of our math is vectorized, but since Vector and all other intrinsics only work on a multiple of the register size, you still need a loop for the remaining data.
I really hope that Shapes will solve it, but it's been a long...
Unofficial answer based on my knowledge:
1. Probably not. TypeCode and others are very difficult to extend.
2. System.Half
3. Probably, but I think built-in arithmetic support should be a precondition.
4. Should be also after arithmetic support.
5. Probably, if they are both IEEE 754.
Huo's points are accurate for the moment, so I won't add to it :) I do empathize with the explosion of code for math. I faced the same issue while writing the DataFrame library. I ended up creating a text template in the end. That may be an option for you if you aren't doing it already? Shapes is the most elegant solution to this problem though.
@Huo Yaoyuan. Thanks for the Info.