
[C++] enum class

Traditional C++ enums had several issues. To solve these problems, C++11 introduced a new feature called enum class. In this article, I will examine the problems with traditional enums and how enum class solves them.

First, a traditional enum could not be forward-declared: since the enumerator values were unknown at the point of declaration, the compiler could not determine the enum's size. An enum class, on the other hand, can be forward-declared because its underlying type is always fixed. It defaults to int when no underlying type is specified, so assigning a value outside the range of an int raises a compilation error; if you need values outside that range, you must specify a wider underlying type.
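A sketch of how this works (the type names Color and FileSize here are illustrative, not from the original post):

```cpp
#include <cstdint>

// Opaque declarations: legal for enum class because the underlying type is fixed.
enum class Color;                    // underlying type defaults to int
enum class FileSize : std::int64_t; // explicitly widened underlying type

// The forward-declared type can already be used in interfaces.
const char* describe(Color c);

// The full definitions may appear later (or in another header).
enum class Color { Red, Green, Blue };
enum class FileSize : std::int64_t { Limit = 5'000'000'000 }; // too big for int

const char* describe(Color c) {
    return c == Color::Red ? "red" : "other";
}
```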

Another problem with traditional enum was that the scope of enumerator names was not limited. Let's see the following example.
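The original code listing appears to have been lost in migration; here is a minimal reconstruction based on the names the article discusses (IOResult, ParseResult, Error, Ok). The second enum is commented out because it triggers exactly the conflict described below:

```cpp
// Both functions want to report success or failure with an enum.
enum IOResult { Error, Ok };

// error: redefinition of enumerator 'Error' (and 'Ok')
// enum ParseResult { Error, Ok, UnexpectedToken };

IOResult read_file() {
    return Ok;  // unqualified enumerator names leak into the enclosing scope
}
```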

Here, we try to represent the results of IO and Parse functions with enums. However, this code will not compile because the enumerators Error and Ok of IOResult conflict with those of ParseResult. To resolve this issue, you can rename the enumerators, or wrap each enum in a namespace.
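The namespace workaround might look like this (the namespace names io and parse are my own, not from the original post):

```cpp
namespace io {
    enum Result { Error, Ok };
}

namespace parse {
    enum Result { Error, Ok, UnexpectedToken };  // no longer conflicts
}

io::Result read_file()    { return io::Ok; }
parse::Result parse_all() { return parse::UnexpectedToken; }
```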

However, with enum class, enumerator names are scoped to the enum class itself, so there is no need for such verbose workarounds.
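The same two result types expressed with enum class, which compile side by side without any conflict (again a reconstruction, using the names the article mentions):

```cpp
enum class IOResult { Error, Ok };
enum class ParseResult { Error, Ok, UnexpectedToken };  // no conflict

// Enumerators must be qualified with the enum's name.
IOResult read_file()    { return IOResult::Ok; }
ParseResult parse_all() { return ParseResult::Error; }
```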

Most importantly, the biggest problem with traditional enums was that they were weakly typed: their values could be implicitly converted to integer types. An enum class, however, does not allow implicit conversion to integer types. If you try to use an enum class value as an int, you will get a compilation error; you must convert it explicitly with static_cast.
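A small sketch of this rule (the Priority type is illustrative, not from the original post):

```cpp
enum class Priority { Low, Medium, High };

int priority_to_int(Priority p) {
    // int n = p;               // error: no implicit conversion to int
    return static_cast<int>(p); // an explicit cast is required
}
```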

As explained above, traditional enums cannot be forward-declared, their enumerator names are not limited in scope, and they are implicitly converted to integer types. enum class solves all three problems, so it is the correct approach in most cases.

Note: This article is a translation of a Korean post written in 2015. If you want to read the original, please refer to this link.

