Representation Theory of the Square: D4, Subgroups, and Decomposition

This essay is meant to be a semi-formal, intuitive introduction to representation theory - the mathematical study of symmetry. For more mathematically rigorous treatments, see the further readings section.

We introduce representation theory through the concrete example of a square's symmetries. We describe these symmetries with linear algebra via matrices, develop the basic definitions (groups, fields, vector spaces, representations, subrepresentations, irreducibility), write down explicit matrix models for the dihedral group D4 (the symmetry group of the square), discuss restrictions to subgroups (rotations C4, a single reflection C2), and build up the full character table of D4.

1. Intuitive Overview: Symmetry as Linear Action

Symmetries in mathematics are transformations that leave an object indistinguishable from its pre-transformation state. Take a square and rotate it by 90 degrees: it looks the same as before. Or draw a line through its middle and reflect the square across that line: again it looks identical to the prior state. These rotations and reflections are what we call symmetries: operations that transform an object into a state indistinguishable from the initial one. More formally: a symmetry is an automorphism of a structured object, i.e. an isomorphism from the object to itself (isomorphisms are defined below). Representation theory takes these symmetries and encodes them as concrete transformations of a vector space, typically in the form of matrices acting on coordinates. This encoding into linear transformations is the mathematical formalization of the intuitive concept of symmetry.

2. Basic Definitions

Before diving into representation theory, it is important to understand the following mathematical concepts.

2.1 Group

A group (G, ⋅G) is a set of elements g together with a group-specific binary operation ⋅G (binary just means it takes two arguments and returns one value). The elements can be numbers, coordinates, or anything else, and the operation can be addition, multiplication, or any other rule. Together they must satisfy the following group axioms:

If additionally the order of the elements does not matter for the result of the operation, i.e. g1 ⋅G g2 = g2 ⋅G g1 for all g1, g2 ∈ G, then the group is called abelian or commutative (two names for the same thing). Examples: the integers under addition; the nonzero reals under multiplication; the symmetries of a polygon under composition (we will get there ;) ).
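For intuition, the axioms can be checked mechanically on a small example - here (ℤ4, addition mod 4), which is also abelian. A minimal Python sketch:

```python
# The group axioms, checked by brute force for (Z4, + mod 4).
G = [0, 1, 2, 3]
op = lambda a, b: (a + b) % 4

assert all(op(a, b) in G for a in G for b in G)                # closure
assert all(op(op(a, b), c) == op(a, op(b, c))
           for a in G for b in G for c in G)                   # associativity
assert all(op(0, a) == a == op(a, 0) for a in G)               # identity: 0
assert all(any(op(a, b) == 0 for b in G) for a in G)           # inverses
assert all(op(a, b) == op(b, a) for a in G for b in G)         # abelian
print("(Z4, + mod 4) satisfies all group axioms and is abelian")
```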

2.2 Field

A field (F, +F, ⋅F) is a set of elements f with two binary operations +F (field addition) and ⋅F (field multiplication) satisfying the following rules:

Structurally, a field combines two abelian groups tied together by the distributive law: (F, +F) is an abelian group with identity 0F, and (F \ {0F}, ⋅F) is an abelian group with identity 1F. 0F is excluded from the multiplicative group because multiplication by 0F collapses everything onto 0F: 0F ⋅F f = 0F for all f ∈ F. This map is therefore not bijective, and no element f can satisfy 0F ⋅F f = 1F, which is why a multiplicative inverse for 0F does not exist (this is why you cannot divide by 0F, as you might remember from school).

Examples: real numbers, complex numbers, rational numbers, and finite fields like 𝔽2 (the field with two elements 0 and 1).

2.3 Vector Space

Now on to vector spaces: a vector space V is a structure built on top of a field F. It consists of objects called vectors, equipped with a vector addition operation +V and a scalar multiplication operation ⋅V (multiplying a vector by a scalar from F), fulfilling the axioms below.

The standard way to express vectors is as ordered tuples in Fn: (f1, f2, ..., fn) of field elements, where addition and scalar multiplication act component-wise. Ordered means that the order of the entries matters: (1,2,3) is not the same as (2,1,3), and both are vectors in ℝ3. Think of such a tuple for intuition; the abstract definition allows more general constructs to be vectors (e.g., the set of all polynomials of degree at most n also forms a vector space).

Now back to the tuple model with v1 = (f1, f2, f3, ...) and v2 = (h1, h2, h3, ...) (where f1, f2, f3, ... ∈ F and h1, h2, h3, ... ∈ F are field elements):

Vector addition: v1 +V v2 = (f1 +F h1, f2 +F h2, f3 +F h3, ...)

Scalar multiplication: For a scalar c ∈ F, c ⋅V v1 = (c ⋅F f1, c ⋅F f2, c ⋅F f3, ...)

The vector space is defined to have the following properties:

Examples:

Note that in the context of vector spaces we often call the field elements scalars - probably because they scale the vector: 2 ⋅V (2,1,3) = (4,2,6).

2.4 Linear Maps and GL(V)

A linear map is a function that preserves the vector addition and scalar multiplication operations: Let V and W be vector spaces over the same field F. A function T: V → W is a linear map if it satisfies:

The general linear group GL(V) is the group of all invertible linear maps V → V (after choosing a basis: all invertible square matrices acting on V). It is itself a group under composition (composition means applying one linear map after another: (T1 ∘ T2)(v) = T1(T2(v))).

2.5 Homomorphism

A homomorphism φ is a function from a group (G, ⋅G) to a group (H, ⋅H) (reminder: the group operations can be addition, multiplication, etc.): φ: G → H. A homomorphism has the property that for all g1, g2 ∈ G, φ(g1 ⋅G g2) = φ(g1) ⋅H φ(g2). If a homomorphism is also a bijection, it is called an isomorphism. An isomorphism guarantees that each element of H is uniquely traceable back to the exact element of G it came from. If (H, ⋅H) = (G, ⋅G), the homomorphism maps the group back into itself and is called an endomorphism (individual elements may still be moved around!). An automorphism is a bijective endomorphism - an isomorphism from a group to itself.

A general convention is to write products like g1 ⋅G g2 or φ(g1) ⋅H φ(g2) simply as g1g2 and φ(g1)φ(g2); the multiplicative group operation is left implicit. We will keep writing additive operations with an explicit + sign.

Examples:
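A concrete example: the exponential map exp: (ℝ, +) → (ℝ>0, ⋅) is a homomorphism - it turns addition into multiplication - and, being a bijection onto the positive reals, even an isomorphism. A minimal Python check using floating-point arithmetic:

```python
import math

# Homomorphism property: exp(a + b) == exp(a) * exp(b),
# i.e. addition in the domain becomes multiplication in the codomain.
for a, b in [(0.5, 1.25), (-2.0, 3.0), (0.0, 7.0)]:
    assert math.isclose(math.exp(a + b), math.exp(a) * math.exp(b))

# exp is bijective onto the positive reals, so it is an isomorphism;
# its inverse log turns multiplication back into addition.
assert math.isclose(math.log(2.0 * 3.0), math.log(2.0) + math.log(3.0))
print("exp is a group homomorphism (even an isomorphism)")
```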

2.6 Representation

Now we are finally getting to representations: Imagine a group (G, ⋅G) as a collection of actions/transformations that can be performed on a structured object. A representation of that group (G, ⋅G) now is a way of describing group elements as structure-preserving transformations of the structured object. For example, permutations of a set (a permutation representation), linear transformations of a vector space (a linear representation), and many more. One key aspect is also how one chooses to mathematically represent said structured object (e.g., a set, a vector space, a matrix, a graph, etc.). For now we focus on linear representations, unless stated otherwise.

A (linear) representation of a group (G, ⋅G) over a field F is a homomorphism ρ: G → GL(V), where GL(V) is the group of all invertible linear operators on V. To put it simply: we are mapping a group of abstract actions onto a (not necessarily unique) mathematical description of those actions as matrices.

It assigns to each g ∈ G an invertible linear operator ρ(g) on V such that ρ(g1g2) = ρ(g1)ρ(g2), ρ(e) = I, and ρ(g-1) = (ρ(g))-1 (note: (ρ(g))-1 means the inverse of the matrix ρ(g), not the preimage under ρ; the latter would be written ρ-1(g) and may not even be well-defined if ρ is not injective). Invertibility of ρ(g) is guaranteed by the codomain GL(V), which consists of invertible maps by definition. The pair (V, ρ) is called a representation of G on V. Think of ρ as a function that takes a group element g and returns the linear map expressing that element's transformation; that map then takes a vector v ∈ V as input and produces the output vector (ρ(g))(v).

2.7 Subrepresentation and Irreducibility

A subspace W ⊆ V is invariant (and then called a subrepresentation) if ρ(g)(W) ⊆ W for all g ∈ G (here ρ(g)(W) is the image of the set W under the linear map ρ(g), i.e. {ρ(g)(w) : w ∈ W}). A representation is called irreducible if it has no non-trivial invariant subspaces (the trivial subspaces being {0} and V). To put it simply: an irreducible representation is one that cannot be broken down into smaller subrepresentations.

Intuitively, irreducible representations are the basic building blocks of representations - they cannot be broken down further.

2.8 Schur's Lemma

Schur's Lemma: Let G be a group (not necessarily finite), and let V and W be irreducible representations of G over a field F. If φ: V → W is a linear map that commutes with the group action (i.e., φ(g·v) = g·φ(v) for all g ∈ G, v ∈ V), then either φ = 0 (the zero map) or φ is an isomorphism.

Essentially: either everything gets mapped onto 0W, or every element in V is uniquely identifiable with an element in W via φ.

Furthermore, if V = W is finite-dimensional and F is algebraically closed*, then every linear map φ: V → V that commutes with the group action is a scalar multiple of the identity: φ = λ·idV for some λ ∈ F.

*A field F is algebraically closed if every non-constant polynomial with coefficients in F has a root in F. For example, ℂ is algebraically closed (this is the fundamental theorem of algebra), while ℝ is not (the polynomial x2 + 1 = 0 has no solution in ℝ).

2.9 Direct Sum

If (V1, ρ1) and (V2, ρ2) are representations, their direct sum is V1 ⊕ V2 with action ρ(g)(v1, v2) = (ρ1(g)v1, ρ2(g)v2). Conceptually, this puts two independent representations side by side - imagine it as two different axes in a 2D plane, similar to the (x,y) coordinate system you know from school - just with representations instead of numbers. In matrix terms: if ρ1(g) is an n × n matrix and ρ2(g) is an m × m matrix, then ρ(g) is the (n+m) × (n+m) block-diagonal matrix with ρ1(g) and ρ2(g) on the diagonal and zeros elsewhere (google block-diagonal matrices for more information).

A representation is called completely reducible when it can be written as a direct sum of irreducible representations - which is fancy for: you can fully untangle it into its basic building blocks (which is not always a given!).

2.10 Maschke's Theorem

Lastly let me introduce another important theorem: Maschke's Theorem: Let G be a finite group and let F be a field whose characteristic* does not divide |G| (the order of the group). Then every finite-dimensional representation of G over F is completely reducible: it decomposes as a direct sum of irreducible subrepresentations.

*The characteristic of a field is the number of times you can add the multiplicative identity 1F to itself before it becomes the additive identity 0F; if this never happens, the characteristic is defined as 0. So ℝ and ℂ have characteristic 0, and the finite field 𝔽2 has characteristic 2 because 1F +F 1F = 0F.

In this essay we work exclusively with the fields ℝ and ℂ (both of characteristic 0), hence Maschke's theorem applies to all finite groups without restriction. Therefore we will be able to decompose all further representations (of this essay!) into their irreducible building blocks.

3. The Symmetry Group of the Square: D4

Now let's dive into representations, how they work and what one can do with them. Consider the symmetries of a square centered at the origin in the plane (notice how we already implicitly chose restrictions). Its symmetries form the so-called dihedral group D4 (this is just a name, don't worry about it):

The order of the group is the number of elements in the group, hence 8, because there are 4 rotations and 4 reflections. Throughout this section we use the left-action convention: matrices act on column vectors from the left (Mv and not vM for M ∈ GL(V) and v ∈ V), and in a product g1g2 the element g2 acts first, then g1. That is, ρ(g1g2)v = ρ(g1)(ρ(g2)v).

3.1 Elements and Relations

Let's formalize the symmetries:

You can read these as follows: e changes nothing - the neutral element, aka the identity operation. r is a rotation by 90 degrees. r2 = r ⋅ r is two 90-degree rotations chained right after each other, resulting in a rotation by 180 degrees, and r3 is three 90-degree rotations chained together, resulting in a rotation by 270 degrees. s is a reflection across the x-axis. sr2 is two 90-degree rotations followed by a reflection across the x-axis, which amounts to a reflection across the y-axis. Similarly, sr is a reflection across the line y = -x, and sr3 is a reflection across the line y = x.

Think about it and realize the following:

3.2 Standard 2×2 Real Matrix Representation

The interesting thing now is that we can CHOOSE how to represent this square; even the choice of the field already matters. We can represent the square as a vector space over the real numbers ℝ or over the complex numbers ℂ. Over ℝ, the square lives in the plane ℝ2 and the group acts on every point (x, y) in the plane(!), not just the four corner vertices. (Side note: one could also build a 4D permutation representation that shuffles the four vertices - a different, reducible representation we will see later.) Over ℂ, we can also work in ℂ2, with the same 2×2 matrices acting on complex column vectors. The key difference is that over ℂ, matrices of finite order (like our rotation and reflection matrices) are always diagonalizable, because ℂ is algebraically closed and their minimal polynomials have distinct roots. This means representations that are irreducible over ℝ can sometimes split into smaller pieces over ℂ. (Side note: not every complex matrix is diagonalizable in general, e.g. Jordan blocks, but matrices arising from finite group representations over ℂ always are, because each ρ(g) has finite order, so its minimal polynomial divides xord(g) - 1 (where ord(g) is the order of g), which has distinct roots over ℂ.) I will get to this concretely in section 4.

For this example we choose ℝ. The representation of the square then becomes a homomorphism ρ: D4 → GL(ℝ2). The group D4 acts on the square by rotating and reflecting it, and ρ assigns to each element of D4 a linear transformation of ℝ2, described by a 2×2 matrix M ∈ GL(ℝ2) acting on column vectors (x, y) in ℝ2. We define our two generators explicitly to fix conventions:

ρ(r) = [[0, -1], [1, 0]] (rotation by 90° counterclockwise), ρ(s) = [[1, 0], [0, -1]] (reflection across the x-axis),

and all further group elements are products of these two; multiplying them out yields the full set of eight 2×2 matrices, one for each element of D4.

These eight matrices are a concrete way of "rotating and reflecting the square in ℝ2", exactly as the abstract group D4 describes.
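As a check, the eight matrices can be generated in code from the two generators and tested against the defining relations r4 = e, s2 = e, srs = r-1. A minimal numpy sketch (the generator matrices assume the conventions of sections 4 and 5: r rotates counterclockwise, s reflects across the x-axis):

```python
import numpy as np

r = np.array([[0, -1], [1, 0]])   # rotation by 90 degrees counterclockwise
s = np.array([[1, 0], [0, -1]])   # reflection across the x-axis
e = np.eye(2, dtype=int)

# Defining relations of D4: r^4 = e, s^2 = e, and s r s = r^-1 = r^3.
assert np.array_equal(np.linalg.matrix_power(r, 4), e)
assert np.array_equal(s @ s, e)
assert np.array_equal(s @ r @ s, np.linalg.matrix_power(r, 3))

# Multiply generators together until no new matrices appear.
elements = [e]
frontier = [e]
while frontier:
    m = frontier.pop()
    for g in (r, s):
        candidate = g @ m
        if not any(np.array_equal(candidate, known) for known in elements):
            elements.append(candidate)
            frontier.append(candidate)

print(len(elements))   # 8 = |D4|
```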

3.3 Alternative Rotation Models

Side note for clarification of preemptive questions:

4. Irreducibility of the Standard 2D Representation

Over ℝ, the standard 2D representation of D4 is irreducible. No nontrivial 1D subspace (a line through the origin) is invariant under all eight matrices: the 90° rotation alone sends any chosen line to a different line, destroying invariance. (Side note: the origin {0} is always invariant under every linear map, since ρ(g)·0 = 0. This is why the definition of irreducibility explicitly excludes {0} and V as trivial invariant subspaces.)

Over ℂ, the same 2D representation becomes reducible for the rotation subgroup: the rotation by 90° has eigenvalues i and -i with eigenvectors proportional to (1, -i) and (1, i) respectively, so the complexified rotation action splits into two 1D characters. Reflections swap these two complex lines, which is why the full D4-action remains irreducible over both ℝ and ℂ: the reflections "mix" the complex eigenspaces and prevent the 2D representation from splitting into 1D pieces.
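The eigenvector claims are quick to verify numerically (a numpy sketch with the generator matrices from section 3.2):

```python
import numpy as np

r = np.array([[0, -1], [1, 0]], dtype=complex)   # rotation by 90 degrees
s = np.array([[1, 0], [0, -1]], dtype=complex)   # reflection across the x-axis

v_plus = np.array([1, -1j])    # claimed eigenvector for eigenvalue +i
v_minus = np.array([1, 1j])    # claimed eigenvector for eigenvalue -i

# The rotation acts as multiplication by +-i on the two complex lines.
assert np.allclose(r @ v_plus, 1j * v_plus)
assert np.allclose(r @ v_minus, -1j * v_minus)

# The reflection swaps the two eigenlines, so neither line is D4-invariant.
assert np.allclose(s @ v_plus, v_minus)
assert np.allclose(s @ v_minus, v_plus)
print("eigenline checks passed")
```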

5. Restricting to Subgroups: C4 (Rotations) and C2 (One Reflection)

5.1 Rotation Subgroup C4 = ⟨r⟩

Restricting the standard 2D representation to the rotation subgroup C4 (elements e, r, r2, r3): over ℝ the 2D action stays irreducible (the 90° rotation fixes no real line), while over ℂ it splits into the two 1D characters with eigenvalues i and -i from section 4.

5.2 Reflection Subgroup C2 = ⟨s⟩

Restricting to the subgroup generated by a single reflection C2 (e.g., s = mirror across the x-axis): ρ(s) = [[1, 0], [0, -1]].

Therefore, as a representation of C2, ℝ2 splits as a direct sum of two 1D irreducible representations (trivial and sign*). The two 1D irreducible representations of C2 = {e, s} are:

- Trivial: both e and s act as +1; realized on the x-axis (the +1-eigenspace).

- Sign: e acts as +1, s acts as -1; realized on the y-axis (the -1-eigenspace).

*The sign representation is standard terminology: the non-identity element acts by multiplication with -1. (In the context of permutations it is defined slightly differently.)

5.3 Subrepresentation vs. Restriction

A subrepresentation is an invariant subspace for the same group action. The axes are not invariant under all of D4 (a 90° rotation moves the x-axis to the y-axis), so they are not subrepresentations of D4. They become invariant only after restricting the acting group to a reflection subgroup (C2).

6. Character Table of D4

6.1 Conjugacy Classes

The conjugacy class (also called conjugation class) of an element g in a group G is the set of all elements you get by conjugating g with every element of the group: Cl(g) = {h g h-1 : h ∈ G}. Two elements are in the same conjugacy class if one can be transformed into the other this way. Intuitively, two conjugate elements do the same thing, just seen from a different perspective or axis.

You can find all conjugacy classes by simply iterating over all h ∈ G for each g ∈ G:
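This brute-force search is easy to run on the 2×2 matrices of section 3.2. A numpy sketch (using that all eight matrices are orthogonal, so each inverse h-1 is just the transpose):

```python
import numpy as np

# The eight matrices of D4 (conventions of section 3.2: r = 90-degree
# counterclockwise rotation, s = reflection across the x-axis).
r = np.array([[0, -1], [1, 0]])
s = np.array([[1, 0], [0, -1]])
rotations = [np.linalg.matrix_power(r, k) for k in range(4)]
group = rotations + [s @ m for m in rotations]   # e, r, r2, r3, s, sr, sr2, sr3

key = lambda m: tuple(m.flatten())   # hashable fingerprint of a matrix

# Cl(g) = { h g h^-1 : h in G }; duplicates collapse via the set of frozensets.
classes = {frozenset(key(h @ g @ h.T) for h in group) for g in group}

print(sorted(len(c) for c in classes))   # class sizes: [1, 1, 2, 2, 2]
```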

In the case of D4 the conjugacy classes become: {e}, {r2}, {r, r3}, {s, sr2}, {sr, sr3}.

The sizes always divide |G| = 8, and they sum to |G|: 1 + 1 + 2 + 2 + 2 = 8 = |G|.

6.2 Characters

The character χ of a representation is the function that assigns to each group element the trace (the sum of the diagonal elements, which equals the sum of the eigenvalues - see linear algebra) of its matrix: χ(g) = tr(ρ(g)). Conjugate matrices have the same trace (since tr(A B A-1) = tr(B) - see linear algebra), so a character χ is constant on conjugacy classes. (Note: this is a one-way relation; the same character value does not imply the same conjugacy class.)

Characters compress a full matrix representation down to a single number per conjugation class. This is enough to completely distinguish irreducible representations: two irreducible representations are isomorphic (i.e. they are the same representation up to a change of basis/coordinates) if and only if they have the same character.

6.3 The Dimension Theorem (Sum-of-Squares Formula)

The Dimension Theorem, also known as the sum-of-squares formula, is an essential insight into the structure of finite groups. It says that the order of a group equals the sum of the squares of the dimensions of its irreducible representations. More formally: Theorem (Dimension Theorem). Let G be a finite group with conjugacy classes C1, ..., Ck, and let V1, ..., Vk be its irreducible representations (over ℂ there are exactly as many irreducible representations as conjugacy classes). Then:

|G| = Σi=1k (dim Vi)2

This follows from decomposing the regular representation. The regular representation has dimension |G|, and by Maschke's theorem (section 2.10) it decomposes into its irreducible representations. Each irreducible Vi appears with multiplicity equal to its dimension, giving |G| = Σ (dim Vi) · (dim Vi) = Σ (dim Vi)2.

As we will see in the next section, for D4 this reads: 12 + 12 + 12 + 12 + 22 = 8 = |D4|. If the character table's dimensions (see section 6.4) did not satisfy this, something would be wrong.

6.4 The Character Table

When determining the conjugation classes of D4 you find 5 of them; the only way to have 5 irreps and satisfy the dimension theorem is with four 1D (A1, A2, B1, B2) and one 2D (E) irreducible representation. The labelling is historical - A, B, E are so-called Mulliken symbols from spectroscopy (see chemistry): A denotes symmetric and B antisymmetric under the principal rotation*, E doubly degenerate**, T triply degenerate**, etc.

*Principal rotation: the highest-order rotation in the group. In D4, the rotations are e (order 1), r (order 4), r2 (order 2), r3 (order 4), so the principal rotation is r (or equivalently r3) with order 4. "A" means: when you apply r, the 1D representation gives you +1. "B" means: when you apply r, you get -1.

**A representation is called n-fold degenerate if it is an irreducible representation of dimension n > 1. Equivalently: the representation space has n basis vectors that are mixed by the group action and cannot be decomposed into smaller invariant subspaces.

Note: one can directly see the dimension via the sum of squares rule in 6.3 - the sum of the squared dimensions of the irreps equals the group order - because we know we have 5 irreps and 8 elements in the group, so the only valid sum of square combination is: 12 + 12 + 12 + 12 + 22 = 8 = |D4|.

Now getting to the character table:

            e    r2   r, r3   s, sr2   sr, sr3
Class size  1     1     2        2        2
A1          1     1     1        1        1
A2          1     1     1       -1       -1
B1          1     1    -1        1       -1
B2          1     1    -1       -1        1
E           2    -2     0        0        0

This is called a character table. It is used to classify the irreducible representations of a group as we will see in the next section.

Before going on it is important to thoroughly understand how this table comes to be. Each column corresponds to a conjugation class. Each row corresponds to an irreducible representation, meaning a different choice of how to represent parts of D4 as matrices.

In section 3.2 we chose to represent D4 as 2×2 matrices acting on 2. That is one representation, the row labeled E. But we could also choose to represent D4 as 1×1 matrices (single numbers), where each group element maps to +1 or -1. This is a separate homomorphism ρ: D4 → GL(ℝ1), targeting a different vector space. There are exactly four consistent ways to do this (constrained by the group multiplication table), giving the rows A1, A2, B1, B2.

These 1D representations are not subspaces or components of the 2D representation E. They are independent homomorphisms from the same group to different target spaces. Your brain might want to decompose the 2D representation into two 1D directions, and you can do that when restricting to a subgroup (section 5.2: restricting to C2, the 2D space splits into the x-axis and y-axis). But under the full D4, the 90° rotation mixes those axes. No 1D subspace survives all 8 symmetries, which is what irreducibility means.

Each cell in the table is the trace of the matrix assigned to that conjugation class by that representation. For example, in our 2D standard representation E: χE(e) = 2 (trace of the 2×2 identity matrix), χE(r2) = -2 (trace of [[-1,0],[0,-1]]), and all rotations by 90°/270° and all reflections have trace 0.
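These trace values can be recomputed directly from the matrices of section 3.2 (a small numpy sketch, assuming the generator conventions r = 90° counterclockwise rotation, s = x-axis reflection):

```python
import numpy as np

r = np.array([[0, -1], [1, 0]])
s = np.array([[1, 0], [0, -1]])
rk = lambda k: np.linalg.matrix_power(r, k)

# One representative per conjugacy class: e, r2, r, s, sr.
reps = {"e": rk(0), "r2": rk(2), "r": rk(1), "s": s, "sr": s @ rk(1)}

# chi_E(g) = tr(rho(g)), computed for each class representative.
chi_E = {name: int(np.trace(m)) for name, m in reps.items()}
print(chi_E)   # {'e': 2, 'r2': -2, 'r': 0, 's': 0, 'sr': 0}
```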

6.5 Using the Character Table: Decomposition

The character table is a central computational tool of representation theory. The key insight: characters of different irreducible representations are orthogonal to each other aka their inner product is zero. Characters of the same irreducible representation have inner product 1. This works analogously to orthogonal basis vectors in a vector space (see section 2.3): you can decompose any vector into components along orthogonal axes by taking dot products with each basis vector. Similarly, irreducible characters form an orthonormal basis of the space of class functions (functions that are constant on conjugation classes), and you can decompose any representation's character into irreducible characters using the inner product.

The inner product on characters is defined as:

⟨χi, χj⟩ = (1/|G|) Σg ∈ G χi(g) · χj(g)* = δij

with * denoting complex conjugation (which does nothing over ℝ) and δij, the Kronecker delta, being 1 if i = j and 0 otherwise. This is the orthogonality of characters(!), and it follows from Maschke's theorem (section 2.10) and Schur's Lemma (section 2.8)!
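The orthogonality relation can be verified numerically against the table from section 6.4, summing class by class (a minimal Python sketch; the characters here are real, so the conjugation is omitted):

```python
# Rows of the D4 character table on the classes e, r2, {r,r3}, {s,sr2}, {sr,sr3},
# together with the class sizes.
sizes = [1, 1, 2, 2, 2]
table = {
    "A1": [1, 1, 1, 1, 1],
    "A2": [1, 1, 1, -1, -1],
    "B1": [1, 1, -1, 1, -1],
    "B2": [1, 1, -1, -1, 1],
    "E":  [2, -2, 0, 0, 0],
}
order = sum(sizes)   # |G| = 8

def inner(chi1, chi2):
    # <chi1, chi2> = (1/|G|) sum_g chi1(g) * chi2(g); grouping the sum over g
    # by conjugacy class turns it into size-weighted class terms.
    return sum(n * a * b for n, a, b in zip(sizes, chi1, chi2)) / order

for i in table:
    for j in table:
        assert inner(table[i], table[j]) == (1.0 if i == j else 0.0)
print("irreducible characters are orthonormal")
```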

With these tools one can now decompose a representation into its irreducible subrepresentations: the multiplicity (how many times it appears) of Vi in a representation with character χ is the inner product of χ with the corresponding irreducible character:

mi = ⟨χ, χi⟩ = (1/|G|) Σg ∈ G χ(g) · χi(g)*

This is what we did by hand in sections 4 and 5: checking which subspaces are invariant under which group elements. The character table systematizes this. Instead of searching for invariant subspaces geometrically, you compute traces and use the inner product formula above - which is a lot easier and more efficient than checking all subspaces manually.

6.6 The Regular Representation: Where All Irreducibles Live

The regular representation is the 8-dimensional representation where D4 acts on itself by left multiplication (each of the 8 group elements is a basis vector). By the dimension theorem, it decomposes as:

A1 ⊕ A2 ⊕ B1 ⊕ B2 ⊕ E ⊕ E

Each 1D irreducible appears once (multiplicity = dimension = 1). The 2D irreducible E appears twice (multiplicity = dimension = 2). In block-diagonal form, the 8×8 matrix for each group element looks like:

[A1   0    0    0    0  0    0  0 ]
[ 0  A2   0    0    0  0    0  0 ]
[ 0   0   B1   0    0  0    0  0 ]
[ 0   0    0   B2   0  0    0  0 ]
[ 0   0    0    0   E11 E12  0  0 ]
[ 0   0    0    0   E21 E22  0  0 ]
[ 0   0    0    0    0  0   E11 E12]
[ 0   0    0    0    0  0   E21 E22]

The first four entries on the diagonal are the 1×1 blocks (single numbers +1 or -1) for A1, A2, B1, B2. Then come two independent 2×2 blocks, both containing the same E matrices (the 2×2 matrices from section 3.2), acting on two orthogonal 2D subspaces inside the 8D space. The two copies of E are the same representation, just living in different coordinates.
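The multiplicity-equals-dimension statement can also be checked with the inner product formula of section 6.5, using the character of the regular representation: χreg(e) = |G| = 8 and χreg(g) = 0 otherwise, since left multiplication by g ≠ e fixes no basis element. A minimal Python sketch:

```python
# Decompose the regular representation of D4 via the inner product formula.
sizes = [1, 1, 2, 2, 2]          # classes: e, r2, {r,r3}, {s,sr2}, {sr,sr3}
chi_reg = [8, 0, 0, 0, 0]        # trace of left multiplication per class
table = {"A1": [1, 1, 1, 1, 1], "A2": [1, 1, 1, -1, -1],
         "B1": [1, 1, -1, 1, -1], "B2": [1, 1, -1, -1, 1],
         "E":  [2, -2, 0, 0, 0]}

# m_i = (1/|G|) * sum over classes of size * chi_reg * chi_i.
mult = {name: sum(n * a * b for n, a, b in zip(sizes, chi_reg, chi)) // 8
        for name, chi in table.items()}
print(mult)   # {'A1': 1, 'A2': 1, 'B1': 1, 'B2': 1, 'E': 2}
```

Each multiplicity equals the dimension of the corresponding irreducible, exactly as the dimension theorem predicts.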

6.7 Example: Decomposing the Permutation Representation

In section 3.2 we mentioned that one could build a 4D permutation representation of D4 by shuffling the four vertices of the square. Each group element permutes the vertices, giving a 4×4 permutation matrix (each row and column has exactly one 1, the rest 0).

To decompose this into irreducibles, compute the character (the trace of each permutation matrix). A permutation matrix has trace equal to its number of fixed points (vertices that stay put). Reading the values off class by class gives χperm = 4 on {e}, 0 on {r2}, 0 on {r, r3}, 2 on {s, sr2}, and 0 on {sr, sr3}.

Now use the inner product formula from section 6.5 to read off the multiplicities. For each irreducible representation, we compute mi = (1/|G|) Σ (class size) · χperm(class) · χi(class), summing over all conjugation classes. To see where each number comes from, here is the full breakdown for A1:

Conjugation class   Class size   χperm   χA1   size · χperm · χA1
{e}                     1          4      1      1 · 4 · 1 = 4
{r2}                    1          0      1      1 · 0 · 1 = 0
{r, r3}                 2          0      1      2 · 0 · 1 = 0
{s, sr2}                2          2      1      2 · 2 · 1 = 4
{sr, sr3}               2          0      1      2 · 0 · 1 = 0

Sum = 4 + 0 + 0 + 4 + 0 = 8. Divide by |G| = 8. So mA1 = 1. The same procedure for each irreducible (replacing the χA1 column with the corresponding row from the character table) gives:

mA1 = (1/8)(4·1 + 0·1 + 0·1 + 2·2·1 + 0·2·1) = (4+4)/8 = 1

mA2 = (1/8)(4·1 + 0·1 + 0·1 + 2·2·(-1) + 0·2·(-1)) = (4-4)/8 = 0

mB1 = (1/8)(4·1 + 0·1 + 0·(-1) + 2·2·1 + 0·2·(-1)) = (4+4)/8 = 1

mB2 = (1/8)(4·1 + 0·1 + 0·(-1) + 2·2·(-1) + 0·2·1) = (4-4)/8 = 0

mE = (1/8)(4·2 + 0·(-2) + 0·0 + 2·2·0 + 0·2·0) = 8/8 = 1

So the 4D permutation representation decomposes as:

A1 ⊕ B1 ⊕ E

We can check that the dimensions add up: 1 + 1 + 2 = 4. The permutation representation is reducible and breaks into exactly these three irreducible pieces. This is the character table at work: no matter how complicated a representation you start with, the decomposition formula tells you which irreducible building blocks it contains and how many of each.
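The whole computation of this section fits in a few lines of Python (a sketch reusing the class sizes, the χperm values, and the table rows from above):

```python
# Decompose the 4D permutation character using the inner product formula.
sizes = [1, 1, 2, 2, 2]          # classes: e, r2, {r,r3}, {s,sr2}, {sr,sr3}
chi_perm = [4, 0, 0, 2, 0]       # number of fixed vertices per class
table = {"A1": [1, 1, 1, 1, 1], "A2": [1, 1, 1, -1, -1],
         "B1": [1, 1, -1, 1, -1], "B2": [1, 1, -1, -1, 1],
         "E":  [2, -2, 0, 0, 0]}

# m_i = (1/|G|) * sum over classes of size * chi_perm * chi_i.
mult = {name: sum(n * p * c for n, p, c in zip(sizes, chi_perm, chi)) // 8
        for name, chi in table.items()}
print(mult)   # {'A1': 1, 'A2': 0, 'B1': 1, 'B2': 0, 'E': 1}

# Dimension check: the pieces must add up to the 4D starting space.
dims = {"A1": 1, "A2": 1, "B1": 1, "B2": 1, "E": 2}
assert sum(mult[k] * dims[k] for k in mult) == 4
```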

6.8 Why This Matters

The character table might look like abstract bookkeeping, but it has direct applications wherever symmetry constrains structure:

In each case, the core idea is the same: symmetry constrains structure, representation theory turns those constraints into concrete, computable information. The character table is the lookup table that makes this practical.

7. Worked Examples and Comparisons

Brief reminder of the key points of examples we went through:

7.1 Standard 2D Rep Is Irreducible Over ℝ

No line through the origin is invariant under all of D4. Rotations by 90° map each candidate line to a different one, preventing a nontrivial invariant subspace.

7.2 Restricting to Rotations C4

Over ℝ the 2D action remains irreducible; over ℂ it splits into two 1D characters with eigenvalues i and -i. This corresponds to left- and right-circular "modes" (complex phases e±iπ/2).

7.3 Restricting to a Single Reflection C2

For s = [[1,0],[0,-1]], the x-axis (+1-eigenspace) and y-axis (-1-eigenspace) are invariant. Thus, as a C2-representation, ℝ2 decomposes as a direct sum of two 1D irreducibles (trivial ⊕ sign).

8. Further Readings

Appendix: Notational Cheatsheet