Are you telling me that no compiler optimizes this? Why?
CPUs don’t read one bit at a time.
It would be slower to read a value if you also had to do bitwise operations to extract it.
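Roughly what that extra work looks like, as a minimal sketch (hypothetical function names, assuming LSB-first packing into bytes):

    #include <cstddef>
    #include <cstdint>

    // Plain bool array: reading an element is just a byte load.
    bool read_plain(const bool* flags, std::size_t i) {
        return flags[i];
    }

    // Bit-packed bools: reading an element needs the byte load plus
    // a shift and a mask before you have the value.
    bool read_packed(const std::uint8_t* bits, std::size_t i) {
        return (bits[i / 8] >> (i % 8)) & 1u;
    }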
But you can also define your own bitfield types to store booleans packed together if you really need to. I would much rather that than have the compiler do it automatically for me.
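For instance, something like this (flag names are made up; bit-field layout is technically implementation-defined, but mainstream compilers pack these eight one-bit fields into a single byte):

    #include <cstdint>

    // Eight booleans packed into one byte via C++ bit-fields,
    // instead of eight separate one-byte bools.
    struct PackedFlags {
        std::uint8_t visible  : 1;
        std::uint8_t dirty    : 1;
        std::uint8_t selected : 1;
        std::uint8_t locked   : 1;
        std::uint8_t hidden   : 1;
        std::uint8_t cached   : 1;
        std::uint8_t active   : 1;
        std::uint8_t pinned   : 1;
    };

    // Holds on common compilers (GCC/Clang/MSVC).
    static_assert(sizeof(PackedFlags) == 1, "all eight flags share one byte");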
Well, there are containers that store booleans in single bits (e.g. std::vector<bool>, which was famously a big mistake). But in the general case you don’t want that because it would be slower.
Why is this a big mistake? I’m not a C++ person.
The mistake was that they created a type that behaves like an array in every case except for bool, for which they created a special magical version that behaves just subtly different enough that it can break things in confusing ways.
Could you provide an example?
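One common trip-up, as a minimal sketch (variable names are just for illustration): with any other element type, operator[] hands back a real reference, but std::vector<bool>::operator[] returns a proxy object instead of a bool&.

    #include <vector>

    int main() {
        std::vector<int>  ints  = {1, 2, 3};
        std::vector<bool> bools = {true, false, true};

        int& ri = ints[0];      // fine: a real reference into the buffer
        // bool& rb = bools[0]; // does not compile: operator[] returns a proxy, not bool&

        auto x = bools[0];      // x is std::vector<bool>::reference (a proxy), not a bool copy
        bools[0] = false;
        bool seen = x;          // seen is false: the "copy" still tracks the container
        (void)ri; (void)seen;
    }

Generic code that assumes std::vector<T> hands out real references breaks in the same way whenever T happens to be bool.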
Consider what the disassembly would look like. There’s no fast way to do it.
It’s also unnecessary since 8 bytes is a negligible amount in most cases. Serialization is the only real scenario where it matters. (Edit: and embedded)
In embedded, if you’re at the point where you need to optimize the bools to reduce the footprint, you fucked up sizing your MCU.