In the world of computing, numbers are a crucial part of the language that computers use to communicate and process information. However, unlike in our everyday lives where we use the decimal numbering system (base-10), computers use a variety of different numbering systems, each with its unique characteristics and advantages. In this blog post, we will explore the most common numbering systems used in computing.
- Decimal Number System (Base-10): The decimal number system is the most commonly used numbering system in our everyday lives, and it consists of ten digits, 0 to 9. In this system, each digit represents a power of 10, starting from 0 at the rightmost digit and increasing by 1 as we move from right to left. For example, the number 1234 in decimal represents (1 x 1000) + (2 x 100) + (3 x 10) + (4 x 1).
- Binary Number System (Base-2): The binary number system is the most fundamental numbering system used in computing, and it consists of only two digits, 0 and 1. In this system, each digit represents a power of 2, starting from 0 at the rightmost digit and increasing by 1 as we move from right to left. For example, the number 1011 in binary represents (1 x 2^3) + (0 x 2^2) + (1 x 2^1) + (1 x 2^0), which is equal to 11 in decimal.
Binary numbers are used extensively in computer hardware and programming because they can be represented using simple electronic circuits, and they are easy to manipulate using logical operations.
- Octal Number System (Base-8): The octal number system consists of eight digits, from 0 to 7, and each digit represents a power of 8. In this system, the leftmost digit represents the highest power of 8, and the rightmost digit represents the lowest power of 8. For example, the number 347 in octal represents (3 x 8^2) + (4 x 8^1) + (7 x 8^0), which is equal to 231 in decimal.
Octal numbers were widely used in the early days of computing when the size of computer memory was limited, and programmers needed to conserve memory space.
- Hexadecimal Number System (Base-16): The hexadecimal number system is widely used in computer programming and digital electronics, and it consists of 16 digits, 0 to 9 and A to F, where A to F represent the decimal values 10 to 15, respectively. Each digit in this system represents a power of 16, starting from 0 at the rightmost digit and increasing by 1 as we move from right to left. For example, the number A5 in hexadecimal represents (10 x 16^1) + (5 x 16^0), which is equal to 165 in decimal.
Hexadecimal numbers are commonly used in computer programming because they can represent large numbers using fewer digits than decimal numbers, and they are easier to read and remember than binary numbers.
Binary operations involve manipulating binary numbers, which are made up of only two digits, 0 and 1. In this blog post, we will explore the most common binary operations used in computing.
- AND Operation: The AND operation is a logical operation that takes two binary inputs and produces a binary output. The output is 1 only if both inputs are 1; otherwise, the output is 0. For example, the AND operation of 1010 and 1101 would be 1000. In computing, the AND operation is commonly used in Boolean algebra to test whether a certain condition is true or false.
- OR Operation: The OR operation is another logical operation that takes two binary inputs and produces a binary output. The output is 1 if at least one input is 1; otherwise, the output is 0. For example, the OR operation of 1010 and 1101 would be 1111. In computing, the OR operation is commonly used to check if either one of two conditions is true.
- NOT Operation: The NOT operation is a unary operation that takes a single binary input and produces a binary output. The output is the complement of the input, i.e., 1 if the input is 0 and 0 if the input is 1. For example, the NOT operation of 1010 would be 0101. In computing, the NOT operation is commonly used to invert a bit of a binary number.
- XOR Operation: The XOR operation is another logical operation that takes two binary inputs and produces a binary output. The output is 1 if the inputs are different; otherwise, the output is 0. For example, the XOR operation of 1010 and 1101 would be 0111. In computing, the XOR operation is commonly used to compare two binary numbers, to toggle bits, and as a building block in checksums and pseudo-random number generators.
- Shift Operations: Shift operations are another type of binary operation used in computing, which involve shifting the binary digits of a number to the left or right by a certain number of bits. There are two types of shift operations: logical and arithmetic shifts. In logical shifts, the bits are shifted and the vacated bits are filled with zeros. In arithmetic right shifts, the vacated bits are filled with copies of the sign bit (the leftmost bit) of the original number, which preserves the sign of two's-complement values; arithmetic left shifts fill with zeros, like logical shifts.
Bits, Bytes & Words
Negative numbers are stored using two's complement. It's worth understanding this before we move on.
Here is a handy guide on some Bitwise tricks
Everything starts with a Hello World example, and here is an ARM 64-bit example:
// Assembler program to print "Hello World!"
// to stdout.
// X0-X2 - parameters to macOS function services
// X16 - macOS function number
.global _start // Provide program starting address to linker
.align 2 // macOS requires instructions aligned on a 4-byte (2^2) boundary
// Setup the parameters to print hello world
// and then call macOS to do it.
_start: mov X0, #1 // 1 = StdOut
adr X1, helloworld // string to print
mov X2, #13 // length of our string
mov X16, #4 // macOS write system call
svc 0 // Call macOS to output the string
// Setup the parameters to exit the program
// and then call macOS to do it.
mov X0, #0 // Use 0 return code
mov X16, #1 // System call 1 terminates this program
svc 0 // Call macOS to terminate the program
helloworld: .ascii "Hello World!\n"
To get started you need to install the latest version of Xcode. This gives you access to the assembler 'as' and the linker 'ld', as well as 'make', a build tool that reads makefiles — sets of instructions for building a binary from the raw source. First assemble the source into an object file, then link it:

as -o HelloWorld.o HelloWorld.s
ld -o HelloWorld HelloWorld.o \
    -syslibroot `xcrun -sdk macosx --show-sdk-path` \
    -e _start

WARNING: If you put these commands in a makefile, make sure each command line starts with a tab and not spaces, otherwise you will get the wonderful error message "makefile:2: *** missing separator. Stop."

This produces an executable called HelloWorld, which we can run like any other program.
Using the Debugger
Now let's run the program under the lldb debugger.
# Load the app into the lldb debugger
$ lldb HelloWorld
(lldb) target create "HelloWorld"
Current executable set to '/Users/keith/Documents/Development/arm/HelloWorld' (arm64).
# Now run the app
(lldb) run
Process 14745 launched: '/Users/keith/Documents/Development/arm/HelloWorld' (arm64)
Process 14745 exited with status = 0 (0x00000000)
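lldb can do much more than launch the program. A few standard commands worth knowing — this is a sketch of a typical session rather than captured output:

```
(lldb) breakpoint set --name _start   # stop at our entry point
(lldb) run
(lldb) register read x0 x1 x2         # inspect the registers we set up
(lldb) stepi                          # execute one instruction at a time
(lldb) continue
```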
There are really only three types of operations an Arm processor can perform:
- Load data from memory into a register
- Store data from a register into memory
- Perform an arithmetic/logic operation on a register
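A hypothetical three-instruction fragment (not part of the Hello World program) illustrates all three, assuming X0 already holds a valid memory address:

```
ldr X1, [X0]   // load: memory at the address in X0 -> register X1
add X1, X1, #1 // arithmetic/logic: add 1 to X1
str X1, [X0]   // store: register X1 -> memory at the address in X0
```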
There are 31 general-purpose registers in the arm64 architecture, each of which is 64 bits wide. These registers are used to hold data, addresses, and other values that are used during program execution. The general-purpose registers are labelled X0 through X30. Register number 31 is not a general-purpose register: depending on the instruction, it refers either to the stack pointer (SP) or to the zero register (XZR), which always reads as zero.
64-bit registers: X0 – X30 (plus SP and XZR)
32-bit registers: W0 – W30 (the lower halves of the X registers, plus WZR)
In addition to the general-purpose registers, the arm64 architecture includes a number of other registers that are used for specific purposes. Some of the most important of these include:
- The program counter (PC) register: This register holds the memory address of the next instruction to be executed.
- The stack pointer (SP) register: This register holds the memory address of the top of the stack.
- The link register (LR, an alias for X30): This register holds the return address of a subroutine call.
- The status flags register (PSTATE): This register holds various flags and settings related to the processor state, as we discussed in the previous post.
There are also a number of other special-purpose registers in the arm64 architecture, including registers used for floating-point and SIMD operations, as well as registers used for managing system calls and exception handling. We will address these later in the tutorial.
This tutorial is currently under development and more content will be added over time