Why are `int32`s all over the place where `uint32`s are apparently more appropriate (number of *, port, etc.)?

In the Kubernetes API and other Kubernetes-related interface definitions written in Go, why is int32 always used for values that we know take only non-negative integers, such as the number of *, ports, etc., in cases where uint32 seems more appropriate?


Because API deserialization happens before validation.

Consider a number field that has to be greater than 0 and less than 65536 (for example, a network port). Seems like a great place to use uint16, right?

What happens if the user tries to send -1 or 65537 or 8675309? Remember, we accept JSON, which is really just a string.

In this case the API will (hopefully!!) get some error while deserializing, and we don’t get to control the error message.
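
A minimal sketch of that failure mode in Go (the `portSpecU16` type is hypothetical, made up just for illustration): unmarshalling a negative number into a `uint16` field fails inside `encoding/json`, with wording we don't control.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// portSpecU16 is a hypothetical spec type, used only to illustrate
// what happens when the field is declared as uint16.
type portSpecU16 struct {
	Port uint16 `json:"port"`
}

func main() {
	var spec portSpecU16
	// The client sends -1. encoding/json rejects it during decoding,
	// before any validation code runs, and the error wording belongs
	// to the json package, not to us.
	err := json.Unmarshal([]byte(`{"port": -1}`), &spec)
	fmt.Println(err)
	// Prints something like:
	// json: cannot unmarshal number -1 into Go struct field portSpecU16.port of type uint16
}
```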

If we deserialize into a larger type, we can validate it how we need and return errors that we like.
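
For contrast, here is a sketch of the larger-type approach; `portSpec` and `validatePort` are made up for this example and are not the actual Kubernetes validation code:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// portSpec mirrors the Kubernetes style: a signed, wider type for a port.
type portSpec struct {
	Port int32 `json:"port"`
}

// validatePort runs after decoding, so the error message is ours to choose.
func validatePort(p int32) error {
	if p < 1 || p > 65535 {
		return fmt.Errorf("port must be between 1 and 65535, got %d", p)
	}
	return nil
}

func main() {
	var spec portSpec
	// -1 and 8675309 both fit in an int32, so decoding succeeds...
	if err := json.Unmarshal([]byte(`{"port": 8675309}`), &spec); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	// ...and validation returns the friendly, controlled error message.
	fmt.Println(validatePort(spec.Port))
}
```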

So why int32 and not int? The size of int is platform defined. It seems unlikely that we will ever put k8s on a platform with int being 16 bits, but in this case it costs us approximately nothing to be explicit. :slight_smile:
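
As a quick aside, you can see the platform dependence of `int` directly; this snippet just prints the bit width on whatever machine it runs on:

```go
package main

import (
	"fmt"
	"strconv"
)

func main() {
	// strconv.IntSize is 32 or 64 depending on the platform, which is
	// exactly the ambiguity that spelling out int32/int64 avoids.
	fmt.Println("int is", strconv.IntSize, "bits on this platform")
}
```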


Thank you for the answer, which is quite informative!

I have one more question. Given that the signedness is purely a consequence of the JSON format, can I also assume that Kubernetes internally uses the half-sized unsigned integers, i.e. uint16 where the API says int32 and uint32 where it says int64? In other words, did the Kubernetes team mechanically double the size of integers when exposing their internally used types through the JSON API?

Thanks!

When we see an integer in the API we usually ask “does it need to be 64 bits?” and if not, use int32.
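
For example, API type fields end up looking roughly like this, with `int32` even for values that are logically unsigned (a simplified sketch, not the verbatim upstream definitions, which live in k8s.io/api and carry more fields and tags):

```go
// ContainerPort is logically 1-65535, yet it is declared as int32
// so that out-of-range input survives decoding and can be rejected
// with a controlled validation error.
type ContainerPort struct {
	ContainerPort int32 `json:"containerPort"`
}

// Replicas can never be negative, but it is still *int32, not uint32.
type DeploymentSpec struct {
	Replicas *int32 `json:"replicas,omitempty"`
}
```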