A number base, also called a radix, defines the number of unique digits (including zero) used to represent values in a positional numeral system. The base determines how digit positions relate to powers of the radix. In our everyday decimal system (base 10), each position represents a successive power of 10: ones, tens, hundreds, thousands, and so on.
The concept of positional notation is one of the most important inventions in the history of mathematics. Rather than requiring unique symbols for every possible value (as Roman numerals do), positional systems use a small set of digits combined with place value to represent any number, no matter how large.
For any base b, a number is represented as a sequence of digits where each digit d at position i (counting from 0 on the right) contributes the value d * b^i to the total. For example, in base 10, the number 347 equals 3*10^2 + 4*10^1 + 7*10^0 = 300 + 40 + 7 = 347. The same principle applies to every other base.
While humans gravitated toward base 10 (likely because we have 10 fingers), computers fundamentally operate in base 2 (binary) because digital circuits have two states: on and off, high voltage and low voltage, 1 and 0. This makes binary the native language of all digital computation.
Four number systems dominate the computing world, each serving distinct purposes and offering unique advantages for different tasks.
Binary uses only two digits: 0 and 1. Every piece of data in a computer -- text, images, video, programs -- is ultimately represented in binary. A single binary digit is called a bit, and 8 bits form a byte. Binary is essential for understanding how computers store and process information at the hardware level.
Binary numbers can grow long quickly. The decimal number 255, for instance, requires 8 binary digits: 11111111. This verbosity is why developers rarely work directly in binary, preferring more compact representations like hexadecimal.
Binary: 1010 1100
Decimal: 172
Meaning: 128 + 0 + 32 + 0 + 8 + 4 + 0 + 0 = 172
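The expansion above is easy to verify in Python, which parses binary strings natively (a quick sketch; the variable names are my own):

```python
# Parse the binary string above and verify its place-value expansion.
bits = "10101100"
value = int(bits, 2)  # built-in base-2 parsing
# Sum each digit times its positional weight (2^i, counting from the right).
expansion = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
print(value, expansion)  # 172 172
```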
Octal uses digits 0 through 7. Each octal digit maps precisely to 3 binary bits, making it a convenient shorthand for binary values. Octal was historically popular in computing systems that used word sizes divisible by 3 (such as 12-bit, 24-bit, and 36-bit architectures).
Today, octal is most commonly encountered in Unix/Linux file permissions. The chmod command uses octal notation to set read (4), write (2), and execute (1) permissions. A permission like 755 means the owner has read+write+execute (7), while group and others have read+execute (5).
Octal: 0755
Binary: 111 101 101
Meaning: rwx r-x r-x
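To see how each octal digit decomposes into read/write/execute bits, here is a small illustrative sketch in Python (the helper name `rwx` is my own; for real permission handling, use the stat module):

```python
# Decode a Unix permission number (e.g. 0o755) into rwx notation.
def rwx(perm: int) -> str:
    out = []
    for shift in (6, 3, 0):                # owner, group, others
        triplet = (perm >> shift) & 0o7    # one octal digit = 3 bits
        out.append(('r' if triplet & 4 else '-') +
                   ('w' if triplet & 2 else '-') +
                   ('x' if triplet & 1 else '-'))
    return ' '.join(out)

print(rwx(0o755))  # rwx r-x r-x
```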
Decimal is the number system humans use daily. It employs digits 0 through 9, and each position represents a power of 10. While computers do not natively "think" in decimal, virtually all user-facing interfaces display numbers in decimal form.
In programming, decimal is the default representation for integer and floating-point literals. When you write 42 in most programming languages, the compiler or interpreter treats it as a base-10 value and converts it to binary internally.
Hexadecimal (hex) uses 16 symbols: digits 0-9 and letters A-F (where A=10, B=11, C=12, D=13, E=14, F=15). Each hex digit maps to exactly 4 binary bits (a nibble), making hexadecimal the most common compact representation of binary data.
Hex is ubiquitous in programming: memory addresses (0x7FFE), color codes (#FF5733), byte values in hex dumps, MAC addresses (AA:BB:CC:DD:EE:FF), Unicode code points (U+1F600), and many more. A single byte (8 bits) is always represented by exactly 2 hex digits, from 00 to FF.
Hex: 0x1A3F
Binary: 0001 1010 0011 1111
Decimal: 6719
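The same example can be round-tripped in Python, whose built-ins handle hex, binary, and decimal views of one value:

```python
# Round-trip the example above between hex, decimal, and binary.
n = int("1A3F", 16)
print(n)                  # 6719
print(format(n, '016b'))  # 0001101000111111 (4 bits per hex digit)
print(hex(n))             # 0x1a3f
```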
Converting between number bases is a fundamental skill in computer science. There are systematic algorithms for every type of conversion, and understanding them deepens your grasp of how numbers work.
To convert a number from any base to decimal, multiply each digit by the base raised to the power of its position (starting from 0 on the right), then sum all the products. This is known as the polynomial expansion method.
Hex 2F to decimal:
2 * 16^1 = 32
F * 16^0 = 15
Total = 47
Binary 110101 to decimal:
1*32 + 1*16 + 0*8 + 1*4 + 0*2 + 1*1 = 53
Octal 157 to decimal:
1*64 + 5*8 + 7*1 = 111
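The polynomial expansion method can be sketched as a short Python function (the name `to_decimal` is my own; in production code, `int(s, base)` does the same job):

```python
# Polynomial-expansion conversion from any base (2-36) to decimal.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def to_decimal(s: str, base: int) -> int:
    total = 0
    for ch in s.upper():
        d = DIGITS.index(ch)
        if d >= base:
            raise ValueError(f"digit {ch!r} invalid in base {base}")
        total = total * base + d  # Horner's method: same result as summing d*b^i
    return total

print(to_decimal("2F", 16), to_decimal("110101", 2), to_decimal("157", 8))
# 47 53 111
```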
To convert a decimal number to any target base, use the repeated division method: divide the number by the target base, record the remainder, and repeat with the quotient until the quotient reaches zero. The remainders, read from bottom to top (last to first), form the result.
Decimal 156 to hex:
156 / 16 = 9 remainder 12 (C)
9 / 16 = 0 remainder 9 (9)
Result: 0x9C
Decimal 42 to binary:
42 / 2 = 21 r 0
21 / 2 = 10 r 1
10 / 2 = 5 r 0
5 / 2 = 2 r 1
2 / 2 = 1 r 0
1 / 2 = 0 r 1
Result: 101010
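The repeated division method translates directly into code; this sketch (helper name `from_decimal` is my own) collects remainders and reverses them at the end:

```python
# Decimal to any base (2-36) via repeated division, as described above.
DIGITS = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def from_decimal(n: int, base: int) -> str:
    if n == 0:
        return "0"
    out = []
    while n > 0:
        n, r = divmod(n, base)
        out.append(DIGITS[r])          # remainders emerge lowest digit first
    return "".join(reversed(out))      # read bottom to top

print(from_decimal(156, 16), from_decimal(42, 2))  # 9C 101010
```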
Because 8 and 16 are powers of 2 (8 = 2^3, 16 = 2^4), conversions between binary and octal or hex are particularly simple. You do not need to go through decimal as an intermediate step.
Binary to Hex: Group binary digits into sets of 4 from right to left, padding with leading zeros if needed. Convert each group to its hex equivalent.
Binary: 1101 0110 1011
Hex: D 6 B
Result: 0xD6B
Binary to Octal: Group binary digits into sets of 3 from right to left, padding with leading zeros if needed. Convert each group to its octal digit.
Binary: 110 101 101 011
Octal: 6 5 5 3
Result: 06553
Hex to Binary: Replace each hex digit with its 4-bit binary equivalent. This is the reverse of the binary-to-hex process.
Hex: A 3 F
Binary: 1010 0011 1111
Result: 101000111111
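The grouping shortcuts above can be made concrete with two small Python helpers (the function names are my own). Note that no decimal arithmetic is involved beyond mapping a single group of bits to one digit:

```python
# Binary <-> hex by 4-bit grouping, with left zero-padding -- no decimal detour.
def bin_to_hex(bits: str) -> str:
    width = (len(bits) + 3) // 4 * 4           # pad to a multiple of 4 bits
    bits = bits.zfill(width)
    return "".join(format(int(bits[i:i+4], 2), 'X')
                   for i in range(0, width, 4))

def hex_to_bin(hx: str) -> str:
    return "".join(format(int(d, 16), '04b') for d in hx)

print(bin_to_hex("110101101011"))  # D6B
print(hex_to_bin("A3F"))           # 101000111111
```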
Binary arithmetic follows the same rules as decimal arithmetic, but with only two digits. Understanding binary math is crucial for low-level programming, digital circuit design, and computer architecture.
Binary addition follows four simple rules: 0+0=0, 0+1=1, 1+0=1, and 1+1=10 (0 with a carry of 1). When a carry propagates, you add it to the next column, just as in decimal addition.
1 0 1 1 (11)
+ 0 1 1 0 ( 6)
---------
1 0 0 0 1 (17)
Carries: 1 1 1
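The column-by-column procedure above can be sketched as a ripple-carry adder over bit strings (an illustrative sketch; `bin_add` is my own name, and `bin(int(a,2)+int(b,2))` would do the same in one line):

```python
# Ripple-carry addition on bit strings, applying the four rules per column.
def bin_add(a: str, b: str) -> str:
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, out = 0, []
    for x, y in zip(reversed(a), reversed(b)):
        s = int(x) + int(y) + carry
        out.append(str(s % 2))   # digit for this column
        carry = s // 2           # carry into the next column
    if carry:
        out.append('1')
    return ''.join(reversed(out))

print(bin_add("1011", "0110"))  # 10001
```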
Binary subtraction uses borrowing, similar to decimal subtraction. When subtracting 1 from 0, you borrow 1 from the next higher bit, which gives you 10 (2 in decimal) in the current position. In practice, computers use two's complement representation to perform subtraction via addition.
Binary multiplication is simpler than decimal multiplication because each partial product is either the multiplicand (when multiplied by 1) or zero (when multiplied by 0). The partial products are shifted left by one position for each bit and then summed.
1 0 1 (5)
x 1 1 (3)
-------
1 0 1 (5 x 1)
1 0 1 0 (5 x 2, shifted left)
-------
1 1 1 1 (15)
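This shift-and-add process is exactly how a simple hardware or software multiplier works; here is a minimal Python sketch of it (function name is my own):

```python
# Shift-and-add multiplication: each set bit of the multiplier contributes
# the multiplicand shifted left by that bit's position.
def bin_mul(a: int, b: int) -> int:
    result, shift = 0, 0
    while b:
        if b & 1:
            result += a << shift  # partial product for this bit
        b >>= 1
        shift += 1
    return result

print(bin_mul(0b101, 0b11))  # 15
```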
Hexadecimal is arguably the most important number base for software developers after decimal. Its compact representation of binary data makes it indispensable for a wide range of programming tasks.
Web colors are specified using hex triplets that represent red, green, and blue (RGB) channel intensities. Each channel uses 2 hex digits (1 byte), ranging from 00 (0, no intensity) to FF (255, full intensity). For example, #FF0000 is pure red, #00FF00 is pure green, and #0000FF is pure blue. The shorthand #F00 expands to #FF0000.
Memory addresses are displayed in hexadecimal because it provides a compact yet precise representation of the underlying binary address. A 64-bit address like 0x00007FFE12345678 is far more readable than its 64-digit binary equivalent. Debuggers, crash logs, and system monitors all use hex addresses.
Most programming languages support hex literals with a 0x prefix. This is especially useful when working with bit masks, flags, hardware registers, or protocol constants where the binary structure matters.
// JavaScript
const mask = 0xFF00;
const color = 0x1A2B3C;
# Python

permissions = 0o755 # Octal
flags = 0b10110100 # Binary
address = 0xDEADBEEF # Hex
// C/C++
#define STATUS_REG 0x04
#define ENABLE_BIT 0x80
unsigned char byte = 0xA5;
Hex dumps display file or memory contents as a sequence of hex byte values alongside their ASCII representation. Tools like xxd, hexdump, and od produce hex dumps that are essential for debugging binary file formats, network protocols, and data corruption issues.
$ xxd sample.bin | head -3
00000000: 4865 6c6c 6f20 576f 726c 640a 5468 6973 Hello World.This
00000010: 2069 7320 6120 7465 7374 2066 696c 650a is a test file.
00000020: 666f 7220 6865 7820 6475 6d70 0a for hex dump.
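A simplified hex-dump routine fits in a few lines of Python (a sketch in the spirit of xxd's layout, not a faithful reimplementation; `bytes.hex(sep)` requires Python 3.8+):

```python
# A minimal hex dump: offset, hex bytes, and printable-ASCII column.
def hexdump(data: bytes, width: int = 16) -> str:
    lines = []
    for off in range(0, len(data), width):
        chunk = data[off:off + width]
        hexpart = chunk.hex(' ')  # space-separated hex bytes (Python 3.8+)
        ascii_ = ''.join(chr(b) if 32 <= b < 127 else '.' for b in chunk)
        lines.append(f"{off:08x}: {hexpart:<{width * 3 - 1}}  {ascii_}")
    return '\n'.join(lines)

print(hexdump(b"Hello World\nThis is a test"))
```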
Bitwise operations manipulate individual bits within integer values. They are among the fastest operations a CPU can perform and are fundamental to systems programming, cryptography, graphics, and embedded development.
The AND operation produces 1 only when both input bits are 1. It is commonly used for masking -- extracting specific bits from a value. For example, to isolate the lower 4 bits of a byte: value & 0x0F.
1100 1010 (0xCA)
& 0000 1111 (0x0F)
-----------
0000 1010 (0x0A) -- lower nibble extracted
The OR operation produces 1 when either or both input bits are 1. It is used to set specific bits without affecting others. For example, to set bit 7: value | 0x80.
0100 0011 (0x43)
| 1000 0000 (0x80)
-----------
1100 0011 (0xC3) -- bit 7 is now set
The XOR (exclusive OR) operation produces 1 when the input bits differ. XOR has a unique property: applying it twice with the same value restores the original. This makes XOR useful for toggling bits, simple checksums, and swap operations without a temporary variable.
1010 1010 (0xAA)
^ 1111 0000 (0xF0)
-----------
0101 1010 (0x5A) -- upper nibble toggled
The NOT operation (bitwise complement) flips every bit: 0 becomes 1 and 1 becomes 0. In two's complement representation, ~x equals -(x+1). NOT is used in creating inverse masks and two's complement negation.
Left shift (<<) moves bits toward higher positions, effectively multiplying by powers of 2. Right shift (>>) moves bits toward lower positions, dividing by powers of 2 (for non-negative values; right-shifting negative numbers is language-dependent, since it may be an arithmetic or a logical shift). These operations are widely used for performance optimization, bit field manipulation, and protocol encoding.
5 << 3 = 40 (5 * 2^3 = 5 * 8)
80 >> 2 = 20 (80 / 2^2 = 80 / 4)
// Extracting bits 4-7 from a byte:
(value >> 4) & 0x0F
Several common patterns appear repeatedly in systems programming:
// Check if bit n is set
bool isSet = (value >> n) & 1;
// Set bit n
value |= (1 << n);
// Clear bit n
value &= ~(1 << n);
// Toggle bit n
value ^= (1 << n);
// Check if power of 2
bool isPow2 = value && !(value & (value - 1));
// Count trailing zeros (find lowest set bit)
int pos = __builtin_ctz(value); // GCC built-in
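For quick experimentation, the same patterns translate directly to Python (helper names are my own; `(value & -value).bit_length() - 1` stands in for GCC's __builtin_ctz):

```python
# The common bit-manipulation patterns, translated to Python.
def is_set(value, n):     return (value >> n) & 1 == 1
def set_bit(value, n):    return value | (1 << n)
def clear_bit(value, n):  return value & ~(1 << n)
def toggle_bit(value, n): return value ^ (1 << n)
def is_pow2(value):       return value != 0 and value & (value - 1) == 0

def lowest_set_bit(value):
    # Position of the lowest set bit: value & -value isolates it.
    return (value & -value).bit_length() - 1

print(set_bit(0b1000, 1), clear_bit(0b1010, 3),
      is_pow2(64), lowest_set_bit(0b10100))  # 10 2 True 2
```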
Representing negative numbers in binary requires a convention since there is no "minus sign" in binary. Several methods exist, but two's complement has become the dominant standard used by virtually all modern processors.
The simplest approach uses the most significant bit (MSB) as a sign bit: 0 for positive, 1 for negative. The remaining bits represent the magnitude. While intuitive, sign-magnitude has two representations for zero (+0 and -0) and requires special handling for addition and subtraction.
Two's complement is the standard method for representing signed integers in modern computers. To negate a number, invert all bits (one's complement) and add 1. This representation has a single zero, and addition/subtraction work identically for signed and unsigned values, simplifying hardware design.
8-bit two's complement range: -128 to +127
5 in binary: 0000 0101
-5 in binary: 1111 1011 (invert: 1111 1010, add 1: 1111 1011)
Verify: 0000 0101 + 1111 1011 = 1 0000 0000 (overflow discarded = 0)
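Python integers are unbounded, so masking with 0xFF yields the 8-bit two's complement pattern, and a sign-aware decode recovers the signed value (a sketch with my own helper names):

```python
# Two's complement encode/decode for a fixed bit width.
def to_twos(n: int, bits: int = 8) -> int:
    return n & ((1 << bits) - 1)      # e.g. -5 -> 0b11111011

def from_twos(pattern: int, bits: int = 8) -> int:
    if pattern & (1 << (bits - 1)):   # MSB set -> negative
        return pattern - (1 << bits)
    return pattern

print(format(to_twos(-5), '08b'), from_twos(0b11111011))  # 11111011 -5
```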
In two's complement, the MSB still acts as a sign indicator (1 = negative), but the value calculation differs from sign-magnitude. For an 8-bit value, the MSB position has weight -128 rather than simply indicating sign.
Number base conversion extends beyond integers to fractional numbers. Converting fractional parts between bases uses a different algorithm than the integer part: multiply the fractional part by the target base, record the integer part of the result, and repeat with the remaining fractional part.
Decimal 0.6875 to binary:
0.6875 * 2 = 1.375 --> 1
0.375 * 2 = 0.75 --> 0
0.75 * 2 = 1.5 --> 1
0.5 * 2 = 1.0 --> 1
Result: 0.1011
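The multiply-and-record procedure above can be sketched as a Python function (my own helper name; the digit table limits this sketch to bases up to 16, and max_digits guards against non-terminating fractions):

```python
# Fractional conversion by repeated multiplication, as described above.
def frac_to_base(frac: float, base: int, max_digits: int = 20) -> str:
    digits = []
    while frac and len(digits) < max_digits:
        frac *= base
        d = int(frac)                         # integer part is the next digit
        digits.append("0123456789ABCDEF"[d])
        frac -= d                             # keep the remaining fraction
    return "0." + "".join(digits)

print(frac_to_base(0.6875, 2))  # 0.1011
```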
Some decimal fractions cannot be represented exactly in binary. The classic example is 0.1, which becomes an infinitely repeating binary fraction (0.0001100110011...). This is the root cause of floating-point precision issues that every programmer encounters: 0.1 + 0.2 !== 0.3 in JavaScript and most other languages using IEEE 754 floating-point.
The IEEE 754 standard defines how floating-point numbers are stored in binary. A 32-bit float consists of 1 sign bit, 8 exponent bits, and 23 mantissa (significand) bits. A 64-bit double has 1 sign bit, 11 exponent bits, and 52 mantissa bits. Understanding this format in binary and hex is essential for debugging precision issues.
Float 3.14 in IEEE 754 (32-bit):
Hex: 0x4048F5C3
Binary: 0 10000000 10010001111010111000011
Sign: 0 (positive)
Exp: 128 - 127 = 1
Mantissa: 1.10010001111010111000011 (with implicit leading 1)
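The breakdown above can be reproduced with Python's struct module, which exposes the raw bit pattern of a float:

```python
import struct

# Inspect the IEEE 754 bit pattern of a 32-bit float.
raw = struct.pack('>f', 3.14)        # big-endian single precision
bits = int.from_bytes(raw, 'big')
print(raw.hex().upper())             # 4048F5C3
print(format(bits, '032b'))          # sign | exponent | mantissa bits
print((bits >> 31) & 1)              # sign bit: 0 (positive)
print(((bits >> 23) & 0xFF) - 127)   # unbiased exponent: 1
```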
Number base conversion is not just an academic exercise -- it is a practical skill used daily across many domains of software development and IT.
IP addresses are fundamentally 32-bit (IPv4) or 128-bit (IPv6) binary numbers. Understanding binary is essential for subnetting, calculating network ranges, and applying subnet masks. An IP address like 192.168.1.0/24 means the first 24 bits define the network, which becomes obvious when viewed in binary.
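Python's stdlib ipaddress module makes the binary view of subnetting easy to inspect:

```python
import ipaddress

# A /24 network: the first 24 bits of the mask are ones, the last 8 are zeros.
net = ipaddress.ip_network('192.168.1.0/24')
print(net.netmask)                        # 255.255.255.0
print(format(int(net.netmask), '032b'))   # 24 ones followed by 8 zeros
print(net.num_addresses)                  # 256
```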
Character encodings like ASCII, UTF-8, and UTF-16 map characters to numeric values that are stored in binary. Understanding hex representation helps when debugging encoding issues. For example, the UTF-8 encoding of the emoji character U+1F600 requires 4 bytes: F0 9F 98 80.
Hash values (MD5, SHA-256) and encryption keys are binary data typically displayed in hexadecimal. A SHA-256 hash produces 256 bits (32 bytes), displayed as 64 hex characters. Understanding hex is essential for verifying file integrity, analyzing security protocols, and debugging cryptographic implementations.
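The 256-bit-to-64-hex-character relationship is easy to confirm with the stdlib hashlib module:

```python
import hashlib

# A SHA-256 digest is 256 bits = 32 bytes, conventionally shown as 64 hex chars.
digest = hashlib.sha256(b"").hexdigest()
print(len(digest))   # 64
print(digest[:8])    # e3b0c442 (the well-known hash of empty input)
```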
Binary file formats (PNG, PDF, ELF, PE) begin with "magic bytes" that identify the format. These are specified and inspected in hexadecimal: PNG files start with 89 50 4E 47, PDF files with 25 50 44 46 (%PDF), and ZIP files with 50 4B (PK).
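A minimal format sniffer using the magic bytes listed above might look like this (an illustrative sketch; real detection libraries check longer signatures and offsets):

```python
# Identify a file format from its leading magic bytes.
SIGNATURES = {
    b"\x89PNG": "PNG",   # 89 50 4E 47
    b"%PDF":   "PDF",    # 25 50 44 46
    b"PK":     "ZIP",    # 50 4B
}

def identify(header: bytes) -> str:
    for magic, name in SIGNATURES.items():
        if header.startswith(magic):
            return name
    return "unknown"

print(identify(b"\x89PNG\r\n\x1a\n"))  # PNG
print(identify(b"%PDF-1.7"))           # PDF
```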
When programming microcontrollers or writing assembly code, you frequently work with hardware registers, port addresses, and bit fields -- all specified in hex or binary. Understanding number bases is not optional in this domain; it is a fundamental requirement.
Our Number Base Converter makes it easy to convert numbers between binary, octal, decimal, hexadecimal, and any custom base from 2 to 36. Enter a number in any base and instantly see the equivalent representation in all other bases.
The tool supports integers of any size, shows step-by-step conversion breakdowns, and provides bit-level visualization for binary values. Whether you are debugging a network protocol, working with color codes, analyzing file headers, or studying computer architecture, this converter saves you time and eliminates manual calculation errors.
A number base (or radix) is the number of unique digits used to represent numbers in a positional numeral system. For example, base 10 (decimal) uses digits 0-9, base 2 (binary) uses 0 and 1, base 8 (octal) uses 0-7, and base 16 (hexadecimal) uses 0-9 and A-F.
To convert binary to decimal, multiply each bit by 2 raised to the power of its position (starting from 0 on the right), then sum all the results. For example, binary 1011 = 1x2^3 + 0x2^2 + 1x2^1 + 1x2^0 = 8 + 0 + 2 + 1 = 11 in decimal.
Hexadecimal is used because each hex digit maps exactly to 4 binary bits, making it a compact and human-readable representation of binary data. It is commonly used for memory addresses, color codes, byte values, MAC addresses, and debugging.
Bitwise operations work directly on the binary representation of numbers. Common operators include AND (&), OR (|), XOR (^), NOT (~), left shift (<<), and right shift (>>). They are used for flags, masks, permissions, performance optimization, and low-level programming.
Repeatedly divide the number by the target base and record the remainders. The remainders, read in reverse order, form the result in the new base.
Each octal digit corresponds to exactly 3 binary bits. Group binary digits into sets of 3 from right to left and convert each group to get the octal equivalent. For example, binary 101110 = 101 110 = 56 in octal.