
Week 4 - Functions

Functions - abstraction, labour, and manipulation


Welcome to Week 4. Over the past three weeks, we’ve been building a vocabulary: variables (containers for change), conditionals (decisions and categories), loops (repetition and time). This week, we encounter something more slippery, more ideological: the function.

A function is, at its most basic, a named sequence of instructions. It’s a way to package code into a reusable unit. You call it by name, it executes, and optionally returns a result. Simple, right?

But functions are never just technical conveniences. They are fundamentally about abstraction - about hiding complexity, creating interfaces, drawing boundaries between what you need to know and what you don’t. And abstraction is always political. It determines who has access to what, what gets made visible and what gets hidden, whose labour is acknowledged and whose is erased.

When you call circle(100, 100, 50), you don’t see the trigonometry that draws it, the pixels being set, the graphics pipeline processing it. The function circle() abstracts all of that away. This is convenient - you can draw circles without understanding Bézier curves. But it’s also a form of control - someone else decided what a circle is, how it should be drawn, what parameters it should take. The function is an interface, and every interface is an exercise of power.

The abstractions we inherit

We’ve been using p5.js functions all along: circle(), rect(), background(), fill(). These are abstractions that someone else built. Let’s look at them critically.

Consider circle(x, y, diameter). What does it hide?

  • The mathematical formula for a circle
  • The algorithm for rasterising curves into pixels
  • The graphics pipeline that actually renders it
  • The fact that it’s probably drawing many tiny line segments, not a true geometric circle
  • The computational cost (how many operations this takes)

The function gives you a clean interface: “draw a circle here, this big”. But underneath is complexity you don’t see.

This raises Højberg’s question: is circle() high-context or low-context? Does it give you clues about what it does and how it might fail? Or does it create assumptions that might be wrong? What if the circle is so large it goes off-canvas? What if the coordinates are negative? The function doesn’t tell you - you have to find out by trying, or by reading documentation (if it exists), or by looking at the source code (if you can find it).

Michael Murtaugh, in “Do (Not) Repeat Yourself”, examines the software engineering principle of DRY - Don’t Repeat Yourself. The industry mantra goes: if you’re writing the same code twice, abstract it into a function. Write it once, use it everywhere. This is supposedly about efficiency, about good practice, about clean code. But Murtaugh asks us to stop and think: Whose efficiency? What kind of labour does DRY privilege? What is lost when we eliminate repetition?

Murtaugh reveals that repetition is not waste - it’s how we learn. “There can be a tangible pleasure in quickly typing out the template of a familiar programming structure. Far from celebrating the birth of a unique new creation from scratch, it is rather a joyful expression of the pattern that increasingly becomes physically embodied in the programmer him/herself.” The act of typing the same pattern, feeling it in your fingers, is how skill develops. The push to eliminate repetition through abstraction actually eliminates the very process by which programmers develop craft.

Meanwhile, Simon Højberg, in “Code for people”, argues that we’ve been focused on the wrong audience. Code isn’t for computers - it’s for the programmers who will read it, maintain it, modify it. “Programmers, not machines, are the primary audience of our work.” But design principles like DRY, SOLID, KISS - these are ego-boosting exercises, “Principle Bingo” where we check boxes to feel clever. They don’t actually help the next person understand what the code does or why.

Højberg describes “low-context codebases” as swamps - hostile environments where every function call “stings like mosquitoes” and side effects ambush you “like venomous snake bites.” Abstraction, when done poorly or dogmatically, creates these swamps. Functions become black boxes that hide not just implementation details but intent, context, history. The next programmer - often your future self - becomes lost.

When we write a function, we’re not just organising code. We’re making claims about what matters, what should be grouped together, what should be separated, what should be visible and what should be hidden. These aren’t neutral technical choices - they’re choices about knowledge, power, and who gets to understand.

This week, we’re going to learn how to write functions. But more importantly, we’re going to question them. What do they hide? What do they reveal? For whom are we abstracting? And what happens when we break abstraction open and look at what’s underneath?

Abstraction in computing is often presented as purely beneficial - it manages complexity, enables reuse, creates modularity. But abstraction is also about distance. It’s about creating layers between you and the material reality of computation.

When you use a function like loadImage(), you’re abstracted away from:

  • The file system operations that load the file
  • The format-specific decoding (JPEG compression, PNG transparency)
  • The memory allocation for the pixel data
  • The colour space conversions
  • The error handling if the file doesn’t exist

All of this is hidden. And that hiding is useful - it would be exhausting to deal with all that every time you want to show an image. But it also means you don’t understand how images work, how they’re stored, how they’re compressed, what data they contain beyond what’s visible.

This is what Nathan Ensmenger calls “The Black Box and the White Box” in software history - functions are black boxes. You put inputs in, you get outputs out, but you don’t see the mechanism. And increasingly, as software becomes more abstract, more layered, more complex - we all work with black boxes we don’t understand.

But whose understanding matters? Højberg points out that code is designed for an audience, but we often forget who that audience is. It’s not the computer. It’s not even primarily yourself in the moment of writing. It’s the programmer who comes after - who needs to fix a bug, add a feature, understand what’s happening. When we optimise for our own cleverness, when we abstract to feel smart, we create swamps for others to navigate.

Murtaugh goes further. He argues that the very separation of “code” from “practice” is harmful. Bad code is said to have a “smell” - as if the code itself, independent of programmers, is the problem. But this displaces responsibility. It makes code seem like an autonomous thing with its own desires, rather than the product of human labour under particular conditions. When we talk about “code smells”, we avoid talking about overwork, about impossible deadlines, about systems that extract maximum productivity from programmers’ bodies.

And make no mistake - programming is bodily labour. Murtaugh describes the physical exhaustion of intense coding sessions: the loss of language, the inability to create meaningful names, the smells (is it the rubbish bin or is it me?), the need for extreme hobbies like rock climbing just to escape the intensity. This is not immaterial mental work. This is labour that consumes bodies.

Discussion questions

Before we write any functions, discuss:

  1. Think about interfaces you use daily (apps, websites, physical objects). What do they hide from you? What do they make visible? Is that hiding empowering or disempowering? Who decided what you get to see?

  2. Murtaugh argues that repetition is essential for learning and skill development. Can you think of examples from your own life where repetition wasn’t waste, but was how you learned something? What would it mean to value repetition in code?

  3. Højberg says design principles like DRY are “Principle Bingo” - checking boxes to feel clever without actually helping anyone understand the code. Have you experienced this? When has following a “best practice” made things worse instead of better?

  4. When you call a function like image() or circle(), you’re trusting that it does what it says. But how do you know? Who wrote it? What assumptions did they make? What biases might be encoded in their implementation? What if the function name lies?

  5. Think about “efficiency” - what does it mean? Efficient for whom? At what cost? When Amazon’s algorithms optimise delivery routes, they’re efficient - but efficient at the cost of driver wellbeing. What would it mean to write “inefficient” code that prioritises human understanding over machine efficiency?

Readings

  1. Michael Murtaugh, “Do (Not) Repeat Yourself” in Fun and Software: Exploring Pleasure, Paradox and Pain in Computing (2014)
  2. Simon Højberg, “Code for people” (2025)

Further reading

  • Nathan Ensmenger, The Computer Boys Take Over (2010) - on the history of software abstraction
  • Wendy Chun, Programmed Visions (2011) - on software as ideology
  • Ellen Ullman, Close to the Machine (1997) - on the embodied experience of programming
  • American Artist, “Black Gooey Universe” (2018) - on interface, Blackness, and naming
  • Legacy Russell, Glitch Feminism (2020) - on breaking systems and refusal

We’ve already been using functions this whole time. Every time you write circle(), background(), random() - you’re calling functions that p5.js provides. But what does it mean to define our own?

A function is a named block of code that:

  1. Can be called by name
  2. Can receive inputs (parameters)
  3. Executes a sequence of instructions
  4. Can return an output

Here’s the basic syntax:

function myFunction() {
  // code goes here
}

Let’s start simple. Remember last week when we drew a grid of circles? We had to write the nested loop every time. What if we could package that into a function?

function drawGrid() {
  for (let x = 0; x < 10; x++) {
    for (let y = 0; y < 10; y++) {
      circle(20 + x * 40, 20 + y * 40, 30);
    }
  }
}

function setup() {
  createCanvas(400, 400);
  background(220);
  drawGrid(); // call our function
}

Now instead of writing the nested loop every time, we just call drawGrid(). We’ve abstracted the grid-drawing logic into a named unit. This is the promise of functions: reusability, organisation, clarity.

But notice what we’ve hidden. When someone reads drawGrid(), they don’t see:

  • How many circles are drawn
  • What size they are
  • How they’re spaced
  • That it’s using nested loops
  • How computationally expensive it is

The function name promises a grid, but the implementation details are hidden. Højberg would ask: is this a “high-context” or “low-context” function? Does it give the next programmer clues about what it does, or does it create a swamp they’ll get lost in?

For the person calling the function, it’s simple. For the person trying to understand, debug, or modify it, it’s opaque. This is the fundamental tension of abstraction.

Let’s pause on this function name: drawGrid(). We’ve decided that this operation is called “drawing a grid”. But is it? It’s setting pixel values in a particular pattern. It’s executing nested loops. It’s consuming CPU cycles. It’s taking time. Why is it a “grid”? Because we say so.

Naming is power. When we name something, we’re making claims about what it is, what it does, who it’s for, what it’s similar to. The history of computing is full of violent naming - “master/slave” in databases, “kill” for stopping processes, “abort” for cancelling operations, “execute” for running code.

Our function names encode worldviews. When we write clean(), optimise(), normalise(), sanitise() - we’re making claims about what’s dirty, what’s optimal, what’s normal, what’s contaminated. And those claims are political.

Artist and theorist American Artist talks about how naming in computing is never neutral. Names create categories. Categories create hierarchies. Hierarchies create power structures. The seemingly simple act of naming a function is actually an act of world-making. (See: Black Gooey Universe)

What if we named our function makeCirclePattern()? Or repeatCircleDrawing()? Or consumeCPUCyclesDrawingCircles()? Each name emphasises different aspects, makes different things visible, hides different things.

Our drawGrid() function always draws the same grid. What if we want different grids? We add parameters - inputs that can vary.

function drawGrid(cols, rows, spacing) {
  for (let x = 0; x < cols; x++) {
    for (let y = 0; y < rows; y++) {
      circle(spacing / 2 + x * spacing, spacing / 2 + y * spacing, spacing * 0.7);
    }
  }
}

function setup() {
  createCanvas(400, 400);
  background(220);
  drawGrid(10, 10, 40); // 10x10 grid, 40px spacing
  // drawGrid(5, 5, 80); // 5x5 grid, 80px spacing
  // drawGrid(20, 20, 20); // 20x20 grid, 20px spacing
}

Now the function is parametric - it can produce different outputs based on inputs. The parameters cols, rows, and spacing are variables that exist only inside the function. When you call drawGrid(10, 10, 40), those values get assigned to the parameters.

But notice: we’ve decided which aspects of the grid are parametric. We can vary the number of columns, rows, and spacing. But what about:

  • The shape (why circles and not squares?)
  • The colour
  • The stroke weight
  • Whether there’s a fill or not
  • The starting position

These are fixed by the function implementation. The parameters we choose to expose define the interface of the function - what can be controlled from outside and what can’t. This is a choice about power and flexibility.
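
One way to expose more of those fixed choices without burdening every caller is JavaScript's default parameter values. The sketch below is a hypothetical redesign, not the version above: `diameter` and `shape` are parameters we are inventing here, while `circle` and `square` are p5.js's own drawing functions.

```javascript
// A hypothetical variant of drawGrid(): old three-argument calls still
// work, but callers who care can now override the circle size and even
// the shape-drawing function itself.
function drawGrid(cols, rows, spacing, diameter = spacing * 0.7, shape = circle) {
  for (let x = 0; x < cols; x++) {
    for (let y = 0; y < rows; y++) {
      shape(spacing / 2 + x * spacing, spacing / 2 + y * spacing, diameter);
    }
  }
}

// drawGrid(10, 10, 40);             // same result as before
// drawGrid(10, 10, 40, 10);         // small circles, wide spacing
// drawGrid(10, 10, 40, 30, square); // a grid of squares instead
```

Every default we pick is still a decision made on the caller's behalf - the interface has grown, but someone still drew its boundary.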

Højberg would say: have we given the next programmer enough clues? Can they understand from the function signature what it does? Or do they need to read the implementation? Have we made the function “high-context” - clear, explicit, hard to misuse - or “low-context” - ambiguous, tricky, full of hidden assumptions?

Parameters have an order. When you call drawGrid(10, 10, 40), the first 10 is cols, the second is rows, the third is spacing. If you get the order wrong, you get unexpected results - or worse, results that look right but aren’t.

This is a source of errors but also a design choice. The order of parameters encodes assumptions about what’s most important, what’s most commonly changed. In p5.js, rect(x, y, width, height) - position comes before size. Why? Because someone decided that’s the natural order. But it’s not universal - it’s cultural, conventional, arbitrary.

Some languages allow named parameters, where you write drawGrid(cols: 10, rows: 10, spacing: 40). This is more explicit but more verbose. The choice between positional and named parameters is a choice about clarity versus brevity - and different communities value these differently.

But here’s the thing: both choices privilege certain programmers. Positional parameters privilege those who already know the function, who’ve memorised the order. Named parameters privilege clarity but require more typing. There is no neutral choice.
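
JavaScript itself has no built-in named parameters, but a common workaround - sketched below as a hypothetical variant, not part of p5.js - is to take a single object and destructure it. The call site then names every value, and order stops mattering.

```javascript
// A named-argument variant of drawGrid(), simulated with object
// destructuring. `circle` is p5.js's circle() function.
function drawGridNamed({ cols, rows, spacing }) {
  for (let x = 0; x < cols; x++) {
    for (let y = 0; y < rows; y++) {
      circle(spacing / 2 + x * spacing, spacing / 2 + y * spacing, spacing * 0.7);
    }
  }
}

// These two calls do exactly the same thing - the names, not the
// positions, carry the meaning:
// drawGridNamed({ cols: 10, rows: 10, spacing: 40 });
// drawGridNamed({ spacing: 40, rows: 10, cols: 10 });
```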

So far, our function does something (draws circles) but doesn’t return anything. Functions can also return values using the return keyword.

function calculateArea(width, height) {
  let area = width * height;
  return area;
}

function setup() {
  createCanvas(400, 400);
  let rectArea = calculateArea(100, 50);
  text("Area: " + rectArea, 10, 20);
}

Now calculateArea() computes something and gives it back. The value after return becomes the result of calling the function. You can store it in a variable, use it in calculations, pass it to other functions.

Return values let functions be transformative rather than just performative. Instead of just doing something visible (like drawing), they can compute something and hand it back for further use.

Let’s combine parameters and return values:

function remapValue(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  // Map mouseX (0-400) to a greyscale value (0-255)
  let grey = remapValue(mouseX, 0, width, 0, 255);
  background(grey);
}

Wait - p5.js already has a map() function that does exactly this! We just reimplemented it. This is instructive: every function in p5.js was written by someone. map() is just code that someone packaged up and gave a name. There’s no magic - it’s maths wrapped in abstraction.
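
And because remapValue() is plain arithmetic, we can sanity-check it by hand, outside p5.js entirely:

```javascript
// remapValue() from above, checked against a few hand-computed cases.
function remapValue(value, start1, stop1, start2, stop2) {
  return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
}

console.log(remapValue(0, 0, 400, 0, 255));   // 0     (left edge -> black)
console.log(remapValue(200, 0, 400, 0, 255)); // 127.5 (middle -> mid grey)
console.log(remapValue(400, 0, 400, 0, 255)); // 255   (right edge -> white)
```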

Variables have scope - they exist in certain contexts and not others. This is about access, visibility, and encapsulation.

let globalValue = 100; // global scope

function setup() {
  let localValue = 50; // local to setup()
  createCanvas(400, 400);
}

function draw() {
  background(220);
  circle(globalValue, 200, 50); // can access globalValue
  // circle(localValue, 200, 50); // ERROR: localValue doesn't exist here
}

Variables declared outside any function are global - visible everywhere. Variables declared inside a function are local - visible only inside that function.

Why does this matter? Scope is about encapsulation - keeping things separate, preventing unintended interference. Global variables can be accessed and modified from anywhere, which can lead to confusing bugs. Local variables are contained - they can’t leak out and affect other parts of the code.

But scope is also about power. Global variables are accessible to everyone - they’re common resources, shared space. Local variables are private - only the function that created them can use them. Encapsulation can be protective (preventing interference) or restrictive (preventing access).

Let’s see a more complex example:

let circleX = 200; // global
let circleY = 200; // global

function moveCircle() {
  let speed = 5; // local to moveCircle
  circleX += speed;
  if (circleX > width) {
    circleX = 0;
  }
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  moveCircle(); // modifies global circleX
  circle(circleX, circleY, 50);
  // console.log(speed); // ERROR: speed doesn't exist here
}

circleX and circleY are global - both moveCircle() and draw() can access them. speed is local to moveCircle() - it only exists inside that function.

This creates a hierarchy of visibility:

  • Global variables: visible everywhere
  • Local variables: visible only in their function
  • Function parameters: visible only in their function (they’re like local variables)
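
One consequence of this hierarchy is shadowing: a parameter (or local variable) with the same name as a global hides the global inside its function. A small sketch, using hypothetical names:

```javascript
let size = 100; // global

function halve(size) { // this parameter shadows the global `size`
  return size / 2;     // refers to the parameter, not the global
}

console.log(halve(40)); // 20  - inside the function, the parameter wins
console.log(size);      // 100 - the global is untouched
```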

Encapsulation can be emancipatory - it lets you hide complexity, create clean boundaries, prevent interference. But it can also be oppressive - it can hide exploitation, encode biases, make systems illegible.

When facial recognition is wrapped in a simple function call, the violence of surveillance is abstracted away. When content moderation algorithms are encapsulated in proprietary functions, the labour conditions of moderators are hidden. When recommendation algorithms are black-boxed, the amplification of extremism is obscured.

The question isn’t whether encapsulation is good or bad - it’s: encapsulation for whom? Who benefits from the hiding? Who is harmed by the opacity? Who gets to see inside the black box and who doesn’t?

The DRY principle: don’t repeat yourself


Software engineering has a principle: DRY - Don’t Repeat Yourself. If you’re writing the same code in multiple places, you should abstract it into a function. Write it once, call it many times.

Here’s a violation of DRY:

function setup() {
  createCanvas(400, 400);
  background(220);
  // Draw three circles - repetitive!
  fill(255, 0, 0);
  circle(100, 200, 50);
  fill(0, 255, 0);
  circle(200, 200, 50);
  fill(0, 0, 255);
  circle(300, 200, 50);
}

The DRY way:

function drawColouredCircle(x, r, g, b) {
  fill(r, g, b);
  circle(x, 200, 50);
}

function setup() {
  createCanvas(400, 400);
  background(220);
  drawColouredCircle(100, 255, 0, 0);
  drawColouredCircle(200, 0, 255, 0);
  drawColouredCircle(300, 0, 0, 255);
}

We’ve eliminated repetition by abstracting the pattern into a function. This is supposedly “cleaner”, more “maintainable”. If we want to change how coloured circles are drawn, we only need to change one place.

But Murtaugh asks: at what cost? The first version is more repetitive, but it’s also more explicit. You can see exactly what’s happening - three circles, three colours, three positions. The second version is DRYer, but it’s also more abstract. You have to understand what drawColouredCircle() does. You have to trust that the function does what its name says.

And Højberg would add: which version gives better clues to the next programmer? Which is higher-context? The repetitive version shows the pattern explicitly. The abstracted version hides it inside a function that might or might not do what you expect.

What if repetition isn’t waste but emphasis? What if seeing the same code three times makes the pattern more visible, not less?

Murtaugh writes: “In poetry, repetition creates rhythm, emphasis, meaning. ‘I have a dream’ repeated is powerful because it repeats. In music, repetition creates structure.” A loop that’s abstracted away loses its experiential quality. The act of typing the same structure three times is how it gets into your fingers, into your body, into your muscle memory.

“There can be a tangible pleasure in quickly typing out the template of a familiar programming structure. Far from celebrating the birth of a unique new creation from scratch, it is rather a joyful expression of the pattern that increasingly becomes physically embodied in the programmer him/herself.”

This is skill development. This is craft. The push to eliminate all repetition through abstraction actually eliminates the process by which we learn, by which patterns become embodied knowledge.

Murtaugh also points out the absurdity: “The very formulation of ‘Don’t Repeat Yourself’ as a kind of a programmer’s mantra, and thus to be recursively repeated, is also absurd.” We’re told not to repeat ourselves by repeating a principle. The principle contradicts itself.

Moreover, repetition is essential to free software communities. The GNU project (GNU’s Not UNIX - itself a recursive repetition) and free software in general are “a rich tapestry of duplication, forked projects and reinventions of the proverbial wheel.” The term ‘yet another’ is common in project names - “Yet Another Markup Language”, “Yet Another Perl Conference”. This is a humorous acknowledgement that repetition, far from being waste, is how communities learn, experiment, and create alternatives.

DRY is an ideology of efficiency that privileges the writer over the reader, the future over the present, optimisation over understanding, the abstract over the concrete, the singular over the multiple. Sometimes the “worse” code - repetitive, explicit, verbose - is better code. More legible, more honest, more human.

This doesn’t mean never use functions. It means: question the reflex to abstract. Ask what’s gained and what’s lost. Ask for whom you’re optimising. Ask what you’re hiding and why.

Working with images: abstraction layered on abstraction


Now let’s talk about images. Images are already heavily abstracted - they’re grids of coloured pixels, but we experience them as pictures, as representations. When we bring images into p5.js, we add another layer of abstraction.

To use an image in p5.js, we first need to load it. p5.js has a special function called preload() that runs before setup():

let img;

function preload() {
  // picsum is like a lorem ipsum for images.
  // This URL gives us a random image of 400x300 pixels.
  img = loadImage('https://picsum.photos/400/300');
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  image(img, 0, 0);
}

loadImage() is a function that abstracts away:

  • HTTP requests (if loading from URL)
  • File system operations (if loading locally)
  • Image format decoding (JPEG, PNG, GIF have different compression algorithms)
  • Colour space conversions
  • Memory allocation for pixel data
  • Error handling

All of this is hidden behind one function call. Convenient? Yes. Transparent? No.

Once loaded, we display images with image():

image(img, x, y); // draw at x, y, original size
image(img, x, y, width, height); // draw at x, y, scaled

Simple interface. But what’s it doing? It’s:

  • Reading pixel data from memory
  • Optionally scaling/resampling pixels
  • Mapping pixels to screen coordinates
  • Handling transparency/alpha channels
  • Applying any active tint or blend modes

Again: convenience through opacity. The function does many things, but from the outside, you can’t tell which. This is what Højberg calls a “low-context” interface - you have to guess, or experiment, or read documentation to understand what’s really happening.

p5.js gives us some simple image manipulation functions:

let img;

function preload() {
  img = loadImage('https://picsum.photos/400/300');
}

function setup() {
  createCanvas(400, 400);
}

function draw() {
  background(220);
  // Tint the image red
  tint(255, 0, 0);
  image(img, 0, 0);
  // Remove tint for anything drawn after this
  noTint();
}


tint() multiplies each pixel’s colour by the tint colour. It’s a simple operation at the pixel level, but abstracted into a function that applies to the whole image. But do you know that’s what it’s doing? Or do you just trust that “tint” means “make it more red”?

These are useful abstractions, but they’re also acts of interpretation. Someone decided that “tinting” means colour multiplication. Someone decided that tint(255, 0, 0) means “more red”. These choices aren’t universal - they’re design decisions that encode particular ways of thinking about colour and images.
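
To make the claim concrete, here is one plausible reconstruction of that multiplication for a single pixel - our own sketch for illustration, not p5.js's actual source. Each channel is scaled by tintChannel / 255:

```javascript
// A hand-rolled per-pixel tint: each channel is multiplied by the
// corresponding tint channel, normalised to the 0-1 range.
function tintPixel(r, g, b, tintR, tintG, tintB) {
  return [r * tintR / 255, g * tintG / 255, b * tintB / 255];
}

console.log(tintPixel(200, 150, 100, 255, 0, 0));     // [200, 0, 0] - only red survives
console.log(tintPixel(200, 150, 100, 255, 255, 255)); // [200, 150, 100] - white tint changes nothing
```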

Breaking abstraction: understanding pixel data


Now we get to the interesting part. Every image is, at its core, a grid of pixels - tiny coloured dots arranged in rows and columns. p5.js normally hides this from you. But we can access it directly.

Before we do that, we need to understand a concept we haven’t covered yet: lists of values.

Imagine you want to store the scores of 5 players. You could do this:

let score1 = 10;
let score2 = 15;
let score3 = 8;
let score4 = 20;
let score5 = 12;

But this is tedious. What if you have 100 players? 1000? Instead, we can store them in a list - a collection of values in order. In JavaScript (and so p5.js), we call this an array.

Here’s how it works:

let scores = [10, 15, 8, 20, 12]; // An array of 5 numbers

The square brackets [] create the list. Each value is separated by a comma. We can access individual values using their position (starting from 0):

let scores = [10, 15, 8, 20, 12];
console.log(scores[0]); // 10 (first value)
console.log(scores[1]); // 15 (second value)
console.log(scores[2]); // 8 (third value)
console.log(scores[4]); // 12 (fifth value)

These values need not be numbers - they can be of any data type. For example, we can store strings:

let pets = ["cat", "dog", "mouse", "rabbit", "hamster"];

or a mix of data types:

let mixed = [10, "cat", true, 3.14];

In fact, an array can hold any data type - numbers, strings, booleans, objects, functions, even other arrays - which makes it a very flexible data structure. When arrays contain other arrays, we get a nested structure.

let nestedArray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];


This is what we call a multi-dimensional array or nested array. To be a little more specific, this is a 2D array. And we can access the values in the nested array using their position, first the outer array, then the inner array:

let nestedArray = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
console.log(nestedArray[0][0]); // 1
console.log(nestedArray[1][2]); // 6
console.log(nestedArray[2][1]); // 8

We can also find out how many values are in the list:

let scores = [10, 15, 8, 20, 12];
console.log(scores.length); // 5

And we can change values:

let scores = [10, 15, 8, 20, 12];
scores[0] = 25; // Change first value to 25
console.log(scores); // [25, 15, 8, 20, 12]

We can use loops to go through all the values:

let scores = [10, 15, 8, 20, 12];
for (let i = 0; i < scores.length; i++) {
  console.log("Player " + i + " score:", scores[i]);
}
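
The same loop pattern can also compute something from the list rather than just printing it - for example, a total and an average:

```javascript
let scores = [10, 15, 8, 20, 12];
let total = 0;
for (let i = 0; i < scores.length; i++) {
  total = total + scores[i]; // accumulate each score
}
console.log("Total:", total);                   // Total: 65
console.log("Average:", total / scores.length); // Average: 13
```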

This is the basics of arrays. We’ll learn much more about them later. For now, this is enough to understand pixel data.

p5.js offers a function called loadPixels() that allows us to access the pixel data of an image or the canvas. When you call the function, p5.js populates a special array called pixels[] with the colour data of every pixel on the canvas or in an image.

Here’s the key concept: every pixel is represented by 4 numbers:

  • Red (0-255)
  • Green (0-255)
  • Blue (0-255)
  • Alpha/transparency (0-255)


So for a 400×400 canvas, we have:

  • 400 × 400 = 160,000 pixels
  • 160,000 × 4 = 640,000 numbers in the array

Let’s see it:

function setup() {
  createCanvas(400, 400);
  pixelDensity(1); // one pixels[] entry per canvas pixel (see note below)
  background(220);
  circle(200, 200, 100);
  loadPixels(); // Populate the pixels[] array
  // pixels[] now contains 640,000 numbers
  // Format: [r0, g0, b0, a0, r1, g1, b1, a1, r2, g2, b2, a2, ...]
  console.log("Total values in pixels[]:", pixels.length);
  console.log("First pixel - Red:", pixels[0]);
  console.log("First pixel - Green:", pixels[1]);
  console.log("First pixel - Blue:", pixels[2]);
  console.log("First pixel - Alpha:", pixels[3]);
}
Console
Total values in pixels[]: 640000
First pixel - Red: 220
First pixel - Green: 220
First pixel - Blue: 220
First pixel - Alpha: 255

(A caveat: on a high-density “retina” display, p5.js normally keeps several physical pixels per canvas pixel, so pixels.length comes out density² times larger. Calling pixelDensity(1) gives the one-to-one mapping assumed here and by the index formula later in this section.)

This is the raw data. No abstraction, no interface, no convenience. Just numbers in a list. This is what Murtaugh means when he talks about the material reality of code - this is what’s actually there, underneath all the friendly function names.

The array is one-dimensional (a single list), but it represents a two-dimensional image (rows and columns). How?

The pixels are stored row by row, left to right, top to bottom:

Row 0: pixel(0,0), pixel(1,0), pixel(2,0), ..., pixel(399,0)
Row 1: pixel(0,1), pixel(1,1), pixel(2,1), ..., pixel(399,1)
Row 2: pixel(0,2), pixel(1,2), pixel(2,2), ..., pixel(399,2)
...

To access a specific pixel at coordinates (x, y), we need to calculate its position in the array:

Formula: index = (y * width + x) * 4

Let’s break this down:

  1. y * width tells us how many pixels come before this row
  2. + x adds the position within the row
  3. * 4 because each pixel takes 4 values (R, G, B, A)

Here’s an example:

function setup() {
  createCanvas(400, 400);
  pixelDensity(1); // one pixels[] entry per canvas pixel
  background(220);
  fill(255, 0, 0);
  circle(200, 200, 100);
  loadPixels();
  // Get the colour of pixel at (200, 200)
  let x = 200;
  let y = 200;
  let index = (y * width + x) * 4;
  let r = pixels[index];
  let g = pixels[index + 1];
  let b = pixels[index + 2];
  let a = pixels[index + 3];
  console.log(`Pixel at (${x}, ${y}):`, r, g, b, a);
}

This formula - (y * width + x) * 4 - is how two-dimensional space (the image) gets flattened into one-dimensional memory (the array). This is a fundamental operation in computing: mapping multi-dimensional structures onto linear memory. It’s not the only way to do it, but it’s become convention. And conventions, as we’ve seen, have consequences.
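
If you find yourself typing the formula often, you can name it yourself. A small helper of our own (not part of p5.js), assuming a pixel density of 1:

```javascript
// Returns the pixels[] index of the red value of the pixel at (x, y),
// for a canvas of the given width.
function pixelIndex(x, y, width) {
  return (y * width + x) * 4;
}

console.log(pixelIndex(0, 0, 400));   // 0    - top-left pixel
console.log(pixelIndex(399, 0, 400)); // 1596 - last pixel of the first row
console.log(pixelIndex(0, 1, 400));   // 1600 - first pixel of the second row
```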

We can also write to the pixel array to change what’s displayed:

function setup() {
  createCanvas(400, 400);
  pixelDensity(1); // one pixels[] entry per canvas pixel
  background(220);
  fill(255, 0, 0);
  circle(200, 200, 100);
  loadPixels();
  // Invert all colours
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i] = 255 - pixels[i];         // invert red
    pixels[i + 1] = 255 - pixels[i + 1]; // invert green
    pixels[i + 2] = 255 - pixels[i + 2]; // invert blue
    // alpha (i + 3) stays the same
  }
  updatePixels(); // Apply the changes to the canvas
}

We’re directly manipulating the pixel data. This is low-level, but it’s also powerful. We’re not limited to what p5.js provides - we can implement any pixel operation we can imagine.

Let’s look at the loop more carefully:

for (let i = 0; i < pixels.length; i += 4) {
  // Process one pixel
}

We start at 0 and increment by 4 each time (i += 4). Why 4? Because each pixel takes 4 values. So i lands on:

  • 0 (first pixel’s red)
  • 4 (second pixel’s red)
  • 8 (third pixel’s red)
  • etc.

Then we access the other channels:

  • pixels[i] is red
  • pixels[i + 1] is green
  • pixels[i + 2] is blue
  • pixels[i + 3] is alpha

This is Murtaugh’s point about repetition and learning: the first time you write this loop, you might not understand it. The second time, you start to see the pattern. By the tenth time, it’s in your fingers - you can type it without thinking. The repetition isn’t waste; it’s how the pattern becomes embodied knowledge.

The same technique works with images loaded from files:

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 400);
  // Load the image's pixels into its pixels[] array
  img.loadPixels();
  // Make it greyscale
  for (let i = 0; i < img.pixels.length; i += 4) {
    // Average the RGB values
    let grey = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
    img.pixels[i] = grey;
    img.pixels[i + 1] = grey;
    img.pixels[i + 2] = grey;
  }
  // Update the image
  img.updatePixels();
  // Display it
  image(img, 0, 0);
}

Now we’re operating on the image’s pixel data directly. We’ve bypassed all of p5’s image manipulation functions and gone straight to the data. We’ve broken the abstraction.

Notice what we’re doing here: taking colour (which we experience as red, green, blue) and reducing it to a single number (grey). This is a lossy transformation - information is destroyed. The formula we use (average of RGB) is just one way to convert to greyscale. There are others (weighted averages, luminance calculations). Each produces different results. Each encodes different assumptions about what “brightness” means.

This is what happens when you break abstraction - you see the arbitrary choices underneath. The formula isn’t natural or inevitable; it’s a decision someone made.
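
To see how much the choice of formula matters, here is the plain average next to one common weighted alternative - the ITU-R BT.601 luma coefficients (plain JavaScript; the helper names are ours):

```javascript
// Two ways to reduce an RGB triple to a single grey value.
function averageGrey(r, g, b) {
  return (r + g + b) / 3;
}

// Weighted sum reflecting the eye's greater sensitivity to green
// (ITU-R BT.601 luma coefficients).
function lumaGrey(r, g, b) {
  return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Pure green looks bright to us, but the plain average treats it
// exactly the same as pure blue:
console.log(averageGrey(0, 255, 0)); // 85
console.log(lumaGrey(0, 255, 0));    // ≈ 149.7
console.log(averageGrey(0, 0, 255)); // 85
console.log(lumaGrey(0, 0, 255));    // ≈ 29.1
```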

p5.js also has a handy createCapture() function for working with the webcam. You can use it to create a live feed, or to capture a still image. Video is, in a way, just a series of images (frames), so everything we’ve learned about images applies to video as well.

let video;
function setup() {
  createCanvas(400, 300);
  // createCapture() belongs in setup(), not preload() -
  // it doesn't follow the loading conventions preload() expects
  video = createCapture(VIDEO);
  video.hide(); // hide the raw <video> element; we draw it ourselves
}
function draw() {
  image(video, 0, 0);
}

So we can update the previous code by replacing the image with the video feed:

let video;
function setup() {
  createCanvas(400, 400);
  video = createCapture(VIDEO);
  video.hide(); // hide the raw element; we draw the processed feed
}
function draw() {
  video.loadPixels();
  for (let i = 0; i < video.pixels.length; i += 4) {
    let grey = (video.pixels[i] + video.pixels[i + 1] + video.pixels[i + 2]) / 3;
    video.pixels[i] = grey;
    video.pixels[i + 1] = grey;
    video.pixels[i + 2] = grey;
  }
  video.updatePixels();
  image(video, 0, 0);
}

When we access pixels directly, we can do things that aren’t possible through normal functions. We can corrupt images, glitch them, reveal their underlying structure.

Glitch art has a history. In the 1990s and 2000s, artists like Rosa Menkman, Phillip Stearns, and Kim Asendorf started deliberately corrupting digital images and videos to expose their underlying structure as data. This wasn’t just aesthetic experimentation - it was epistemological. As Menkman argues, glitches reveal how systems work by showing how they break. The glitch is a “moment(um)” where the normally invisible becomes visible.

let video;
function setup() {
  createCanvas(400, 300);
  video = createCapture(VIDEO);
  video.hide(); // hide the raw element; we draw the processed feed
}
function draw() {
  video.loadPixels();
  let shiftAmount = 20;
  // Store shifted red channel values temporarily
  let newRedValues = [];
  // First pass: calculate new red values
  for (let y = 0; y < video.height; y++) {
    for (let x = 0; x < video.width; x++) {
      let index = (y * video.width + x) * 4;
      // Calculate shifted position for red channel
      let shiftedX = (x + shiftAmount) % video.width;
      let shiftedIndex = (y * video.width + shiftedX) * 4;
      // Store the red value we want at this position
      newRedValues[index] = video.pixels[shiftedIndex];
    }
  }
  // Second pass: apply new red values
  for (let i = 0; i < video.pixels.length; i += 4) {
    video.pixels[i] = newRedValues[i];
  }
  video.updatePixels();
  image(video, 0, 0);
}

This creates a chromatic aberration effect - the red channel is shifted horizontally. This kind of glitch reveals the image as three separate colour channels, not a unified picture. You can see the structure underneath - the fact that colour is stored as separate R, G, B values, not as a unified experience.

This is what Menkman calls “vernacular of file formats” - understanding images by breaking them, by seeing their structure exposed. The glitch is pedagogical.

Let’s create another effect - converting to pure black and white based on brightness:

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  img.loadPixels();
  let threshold = 128; // midpoint between 0 and 255
  for (let i = 0; i < img.pixels.length; i += 4) {
    // Calculate brightness
    let brightness = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
    // If brighter than threshold, make white. Otherwise, black.
    let newValue = 0;
    if (brightness > threshold) {
      newValue = 255;
    }
    img.pixels[i] = newValue;
    img.pixels[i + 1] = newValue;
    img.pixels[i + 2] = newValue;
  }
  img.updatePixels();
  image(img, 0, 0);
}

This creates a stark, high-contrast image. All the subtle gradations are gone - only black and white remain. This is a form of violence to the image, a reduction of complexity. But it’s also historically significant.

Early computer displays could only show black and white. Early image processing algorithms used thresholding because it was computationally cheap. What we’re doing here - this brutal binary reduction - is how images were first processed by computers. We’re recreating a historical constraint. A gentler version of the same reduction is posterisation, which limits each channel to a handful of levels:

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  img.loadPixels();
  let levels = 4; // number of levels per channel
  let step = 255 / (levels - 1);
  for (let i = 0; i < img.pixels.length; i += 4) {
    // Quantise each channel
    let r = img.pixels[i];
    let g = img.pixels[i + 1];
    let b = img.pixels[i + 2];
    // Round to nearest level
    r = round(r / step) * step;
    g = round(g / step) * step;
    b = round(b / step) * step;
    img.pixels[i] = r;
    img.pixels[i + 1] = g;
    img.pixels[i + 2] = b;
  }
  img.updatePixels();
  image(img, 0, 0);
}

Posterisation reduces the number of colours in an image. With only 4 levels per channel, we get 4×4×4 = 64 possible colours instead of 256×256×256 = 16.7 million. This reveals how much information is usually hidden in smooth gradients.

Again, this has historical roots. Early computer displays and printers had limited colour palettes - 16 colours, 256 colours. Posterisation recreates this constraint. What looks like an aesthetic choice is actually revealing a material history of computing hardware.
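
The rounding arithmetic inside that loop can be checked by hand (plain JavaScript; quantise is our name for the per-channel step):

```javascript
// Snap a 0-255 channel value to the nearest of `levels` evenly
// spaced values - the same arithmetic as the posterisation loop.
function quantise(value, levels) {
  const step = 255 / (levels - 1);
  return Math.round(value / step) * step;
}

// With 4 levels the step is 85, so the only possible outputs
// are 0, 85, 170 and 255:
console.log(quantise(200, 4)); // 170
console.log(quantise(42, 4));  // 0
console.log(quantise(43, 4));  // 85
```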

We can go further. What if we treat image data as just numbers and corrupt them randomly?

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  img.loadPixels();
  let corruptionAmount = 1000; // how many pixels to corrupt
  // Randomly corrupt pixels
  for (let n = 0; n < corruptionAmount; n++) {
    // Pick a random pixel (ensuring the index lands on a red channel)
    let randomPixelNumber = floor(random(img.pixels.length / 4));
    let randomIndex = randomPixelNumber * 4;
    // Set its r, g and b values to a random colour
    img.pixels[randomIndex] = random(255);
    img.pixels[randomIndex + 1] = random(255);
    img.pixels[randomIndex + 2] = random(255);
  }
  img.updatePixels();
  image(img, 0, 0);
}

We don’t even need to load images - we can create them pixel by pixel using just numbers and some maths.

function setup() {
  createCanvas(400, 400);
  pixelDensity(1); // keep the index formula valid on high-DPI displays
  loadPixels();
  // Creating a gradient
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let index = (y * width + x) * 4;
      // Red increases left to right
      pixels[index] = (x / width) * 255;
      // Green increases top to bottom
      pixels[index + 1] = (y / height) * 255;
      // Blue is constant
      pixels[index + 2] = 128;
      // Alpha is opaque
      pixels[index + 3] = 255;
    }
  }
  updatePixels();
}

This creates an image mathematically. No photo, no file - just numbers generated by a formula. This is procedural image generation. No camera, no photographer, no subject. Just maths.

We can get more complex using p5’s noise() function:

function setup() {
  createCanvas(400, 400);
  pixelDensity(1); // keep the index formula valid on high-DPI displays
  loadPixels();
  // Create noise pattern
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let index = (y * width + x) * 4;
      // Use p5's noise function
      let n = noise(x * 0.01, y * 0.01);
      let grey = n * 255;
      pixels[index] = grey;
      pixels[index + 1] = grey;
      pixels[index + 2] = grey;
      pixels[index + 3] = 255;
    }
  }
  updatePixels();
}

Or create animated interference patterns:

let time = 0;
function setup() {
  createCanvas(400, 400);
  pixelDensity(1); // keep the index formula valid on high-DPI displays
}
function draw() {
  loadPixels();
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      let index = (y * width + x) * 4;
      // Create interference pattern
      let d1 = dist(x, y, width / 3, height / 2);
      let d2 = dist(x, y, 2 * width / 3, height / 2);
      let wave1 = sin(d1 * 0.05 - time);
      let wave2 = sin(d2 * 0.05 - time);
      let interference = (wave1 + wave2) / 2;
      let brightness = map(interference, -1, 1, 0, 255);
      pixels[index] = brightness;
      pixels[index + 1] = brightness;
      pixels[index + 2] = brightness;
      pixels[index + 3] = 255;
    }
  }
  updatePixels();
  time += 0.05;
}

This creates animated interference patterns - purely mathematical, purely procedural. No photographic source, just algorithms generating visual patterns. This is what John Whitney was doing in the 1960s with his analogue computers - using mathematics to generate moving images. What required specialised hardware then, we can now do with a few lines of JavaScript.

Now let’s bring it all together - write functions that operate on pixels. This is where we create our own abstractions.

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  invertImage(img); // use our function
  image(img, 0, 0);
}
function invertImage(img) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    img.pixels[i] = 255 - img.pixels[i];
    img.pixels[i + 1] = 255 - img.pixels[i + 1];
    img.pixels[i + 2] = 255 - img.pixels[i + 2];
  }
  img.updatePixels();
}

Now invertImage() is a reusable function that inverts any image. We’ve created our own abstraction - a black box that does something useful. But unlike p5’s built-in functions, we wrote this one. We know what’s inside. We control what it hides.

This is the paradox Murtaugh identifies: we need abstraction (complexity is unmanageable without it), but we must also be able to break it (to understand, to modify, to learn). By writing our own functions, we’re on both sides - we’re abstracting for future use, but we also understand what’s being abstracted because we just wrote it.

Let’s make our functions more flexible:

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  adjustBrightness(img, 50); // brighten by 50
  // adjustBrightness(img, -30); // darken by 30
  image(img, 0, 0);
}
function adjustBrightness(img, amount) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    img.pixels[i] = constrain(img.pixels[i] + amount, 0, 255);
    img.pixels[i + 1] = constrain(img.pixels[i + 1] + amount, 0, 255);
    img.pixels[i + 2] = constrain(img.pixels[i + 2] + amount, 0, 255);
  }
  img.updatePixels();
}

The amount parameter makes this function flexible. Positive values brighten, negative values darken. We’ve exposed one aspect of the operation as controllable from outside.

But notice: we’ve made amount parametric, but not other things. We haven’t made the constrain range parametric. We haven’t made the operation itself parametric (what if we want to multiply instead of add?). These are choices. Each parameter we add makes the function more flexible but also more complex. Each parameter is another thing the caller needs to understand.

Højberg would ask: have we given enough clues? Is it obvious what amount means? What happens if you pass 1000? Or -1000? The function will work (thanks to constrain), but does the name suggest these boundaries?
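
One way to make the operation itself parametric is to pass a function as an argument. Here is a sketch of the idea on a plain RGBA array (mapChannels is a hypothetical helper of ours, not part of p5):

```javascript
// Apply a per-channel operation to an RGBA pixel array in place.
// `op` receives one channel value (0-255) and returns a new one;
// alpha is left untouched.
function mapChannels(pixels, op) {
  for (let i = 0; i < pixels.length; i += 4) {
    pixels[i] = op(pixels[i]);         // red
    pixels[i + 1] = op(pixels[i + 1]); // green
    pixels[i + 2] = op(pixels[i + 2]); // blue
  }
  return pixels;
}

// "Invert" and "brighten" become arguments rather than separate loops:
const px = [100, 150, 200, 255]; // one pixel's worth of data
console.log(mapChannels(px.slice(), v => 255 - v));
// [155, 105, 55, 255]
console.log(mapChannels(px.slice(), v => Math.min(255, v + 50)));
// [150, 200, 250, 255]
```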

We can call multiple functions in sequence to create complex effects:

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  // Chain operations
  invertImage(img);
  adjustBrightness(img, -30);
  image(img, 0, 0);
}
function invertImage(img) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    img.pixels[i] = 255 - img.pixels[i];
    img.pixels[i + 1] = 255 - img.pixels[i + 1];
    img.pixels[i + 2] = 255 - img.pixels[i + 2];
  }
  img.updatePixels();
}
function adjustBrightness(img, amount) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    img.pixels[i] = constrain(img.pixels[i] + amount, 0, 255);
    img.pixels[i + 1] = constrain(img.pixels[i + 1] + amount, 0, 255);
    img.pixels[i + 2] = constrain(img.pixels[i + 2] + amount, 0, 255);
  }
  img.updatePixels();
}

This is building a processing pipeline - a sequence of transformations. Each function does one thing, and we combine them to create complex effects. This is modular abstraction - small pieces that can be rearranged.
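
The pipeline idea can be written down directly - an ordered list of steps applied in turn. This is a per-channel sketch in plain JavaScript, not p5 code:

```javascript
// Each step takes a channel value (0-255) and returns a new one.
const pipeline = [
  v => 255 - v,                            // invert
  v => Math.max(0, Math.min(255, v - 30)), // darken by 30, clamped
];

// Run a value through every step, in order.
function runPipeline(value, steps) {
  let result = value;
  for (const step of steps) {
    result = step(result);
  }
  return result;
}

// 200 inverts to 55, then darkens to 25:
console.log(runPipeline(200, pipeline)); // 25
```

Because the steps are just values in an array, they can be reordered, removed, or added to without touching the loop that runs them - that is the modularity the prose above describes.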

This is DRY in action: instead of writing the invert code and the brightness code together every time, we write each once and combine them. But notice what we’ve lost: when you read invertImage(img); adjustBrightness(img, -30);, you don’t see the pixel loops, you don’t see the calculations. You have to trust that the functions do what their names say. You’ve traded explicitness for reusability.

Is this better? It depends. For someone who knows what these functions do, it’s cleaner. For someone trying to understand the code for the first time, it’s more opaque. Murtaugh would remind us: there’s value in seeing the repeated structure. Højberg would ask: do these function names give enough clues?

Functions with return values: analysing images


All our functions so far modify the image directly - they’re destructive. What if we want to analyse an image without changing it?

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  let avgBrightness = getAverageBrightness(img);
  console.log("Average brightness:", avgBrightness);
  image(img, 0, 0);
  // Display the value
  fill(0);
  textSize(16);
  text("Avg Brightness: " + round(avgBrightness), 10, 20);
}
function getAverageBrightness(img) {
  img.loadPixels();
  let totalBrightness = 0;
  let pixelCount = img.pixels.length / 4; // Divide by 4 because each pixel is 4 values
  for (let i = 0; i < img.pixels.length; i += 4) {
    let r = img.pixels[i];
    let g = img.pixels[i + 1];
    let b = img.pixels[i + 2];
    let brightness = (r + g + b) / 3;
    totalBrightness += brightness;
  }
  let averageBrightness = totalBrightness / pixelCount;
  return averageBrightness;
}

This function analyses the image without modifying it. It calculates something and returns the result. We can use this information to make decisions about how to process the image.

Notice the naming: getAverageBrightness. The “get” prefix is a convention that suggests “this function returns a value without changing things”. But it’s just a convention - nothing enforces it. The function could secretly modify the image. This is what Højberg means about “low-context” code - you have to trust conventions, or read the implementation, or hope there’s documentation.

Let’s use this analysis function to make decisions:

let img;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  // Auto-adjust brightness based on average
  let avgBrightness = getAverageBrightness(img);
  if (avgBrightness < 100) {
    // Image is dark, brighten it
    adjustBrightness(img, 50);
  } else if (avgBrightness > 155) {
    // Image is bright, darken it
    adjustBrightness(img, -50);
  }
  image(img, 0, 0);
}
function getAverageBrightness(img) {
  img.loadPixels();
  let totalBrightness = 0;
  let pixelCount = img.pixels.length / 4;
  for (let i = 0; i < img.pixels.length; i += 4) {
    let brightness = (img.pixels[i] + img.pixels[i + 1] + img.pixels[i + 2]) / 3;
    totalBrightness += brightness;
  }
  return totalBrightness / pixelCount;
}
function adjustBrightness(img, amount) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    img.pixels[i] = constrain(img.pixels[i] + amount, 0, 255);
    img.pixels[i + 1] = constrain(img.pixels[i + 1] + amount, 0, 255);
    img.pixels[i + 2] = constrain(img.pixels[i + 2] + amount, 0, 255);
  }
  img.updatePixels();
}

Now we’re using one function to analyse and another to modify. We’re building a system where functions work together. This is composition - small functions combined to create larger behaviours.

But notice: we’re making aesthetic decisions (what counts as “too dark” or “too bright”) and encoding them in code. These thresholds - 100, 155 - are arbitrary. Different choices would produce different results. The code looks objective (“auto-adjust”) but it’s actually full of subjective judgments.

Let’s make image processing interactive using what we learnt in previous weeks:

let img;
let originalImg;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  // Store a copy of the original
  originalImg = createImage(img.width, img.height);
  originalImg.copy(img, 0, 0, img.width, img.height, 0, 0, img.width, img.height);
}
function draw() {
  // Reset to original each frame
  img.copy(originalImg, 0, 0, img.width, img.height, 0, 0, img.width, img.height);
  // Adjust based on mouseX
  let brightnessAmount = map(mouseX, 0, width, -100, 100);
  adjustBrightness(img, brightnessAmount);
  image(img, 0, 0);
  // Display value
  fill(255);
  stroke(0);
  strokeWeight(3);
  textSize(16);
  text("Brightness: " + round(brightnessAmount), 10, 20);
}
function adjustBrightness(img, amount) {
  img.loadPixels();
  for (let i = 0; i < img.pixels.length; i += 4) {
    img.pixels[i] = constrain(img.pixels[i] + amount, 0, 255);
    img.pixels[i + 1] = constrain(img.pixels[i + 1] + amount, 0, 255);
    img.pixels[i + 2] = constrain(img.pixels[i + 2] + amount, 0, 255);
  }
  img.updatePixels();
}

Now moving your mouse left and right changes the brightness in real-time. We’ve made the abstraction interactive. The function becomes an instrument - you can play with it, explore its parameter space, understand its behaviour through interaction rather than reading code.

This is a different kind of understanding. Instead of reading the implementation, you explore the possibility space. This is what Murtaugh means when he talks about the “quasi-ejaculatory nature” of finding the right abstraction - there’s a pleasure in discovering the behaviour through use, through repetition, through embodied interaction.

Let’s create a glitch function that takes an intensity parameter:

let img;
let originalImg;
function preload() {
  img = loadImage('https://picsum.photos/400/300');
}
function setup() {
  createCanvas(400, 300);
  originalImg = createImage(img.width, img.height);
  originalImg.copy(img, 0, 0, img.width, img.height, 0, 0, img.width, img.height);
}
function draw() {
  // Reset
  img.copy(originalImg, 0, 0, img.width, img.height, 0, 0, img.width, img.height);
  // Glitch based on mouseX
  let glitchIntensity = map(mouseX, 0, width, 0, 1);
  glitchImage(img, glitchIntensity);
  image(img, 0, 0);
  // Display value
  fill(255);
  stroke(0);
  strokeWeight(3);
  text("Glitch: " + round(glitchIntensity * 100) + "%", 10, 20);
}
function glitchImage(img, intensity) {
  img.loadPixels();
  // Number of glitches based on intensity
  let glitchCount = floor(intensity * 100);
  for (let n = 0; n < glitchCount; n++) {
    // Pick random pixel
    let randomPixelNumber = floor(random(img.pixels.length / 4));
    let randomIndex = randomPixelNumber * 4;
    // Decide what to do randomly
    let glitchType = floor(random(3));
    if (glitchType == 0) {
      // Set to random colour
      img.pixels[randomIndex] = random(255);
      img.pixels[randomIndex + 1] = random(255);
      img.pixels[randomIndex + 2] = random(255);
    } else if (glitchType == 1) {
      // Invert
      img.pixels[randomIndex] = 255 - img.pixels[randomIndex];
      img.pixels[randomIndex + 1] = 255 - img.pixels[randomIndex + 1];
      img.pixels[randomIndex + 2] = 255 - img.pixels[randomIndex + 2];
    } else {
      // Set to black
      img.pixels[randomIndex] = 0;
      img.pixels[randomIndex + 1] = 0;
      img.pixels[randomIndex + 2] = 0;
    }
  }
  img.updatePixels();
}

Now we have a glitch function that takes an intensity parameter. The glitch becomes an instrument we can play. Move your mouse and watch the image degrade in real-time. This is what Rosa Menkman calls “the glitch as a tool” - not just an accident or error, but a deliberate technique for revealing structure.


Reflection: functions as power and resistance


Let’s step back and think about what we’ve done.

We started by learning to write functions - to create abstractions, to package code into reusable units. We saw how this is useful for organisation and reuse. But we also questioned it. We asked: what does abstraction hide? Whose labour? What assumptions? For whom are we optimising?

Then we looked at p5.js’s built-in functions for images - loadImage(), image(), tint(). These are convenient abstractions that hide technical complexity. But they also limit what we can do. They provide an interface, and every interface is a constraint. Every interface is a choice about what’s easy and what’s hard, what’s visible and what’s hidden.

So we broke the abstraction. We accessed the pixel data directly using loadPixels() and the pixels[] array. We learned about arrays - lists of values - as a way to understand how pixel data is stored. We manipulated images at the lowest level - individual colour values stored in a long list of numbers. We created glitches, corruptions, procedural patterns. We saw what’s underneath the clean interface.

And then - and this is important - we built our own abstractions on top of that. We wrote functions like invertImage() and adjustBrightness() that operate on pixels. We created our own interfaces, our own black boxes. But because we wrote them ourselves, because we understand what’s inside, these aren’t hostile abstractions. They’re not swamps. They’re paths we’ve built for ourselves (and maybe for others).

This is the dialectic Murtaugh and Højberg both point to: we need abstraction (complexity is unmanageable without it), but we must also be able to break it (to understand, to modify, to learn, to resist). Functions are tools of power - they determine what’s easy and what’s hard, what’s visible and what’s hidden, who can understand and who can’t. But we can write our own functions. We can create our own abstractions. We can decide what to hide and what to reveal. We can build high-context code instead of low-context swamps.

As you work with functions and images this week, consider:

  1. Abstraction and access: When p5.js provides tint() as an abstraction, it makes tinting easy. But it also means you might never learn how colour multiplication works at the pixel level. Is convenience worth the loss of knowledge? Who benefits from your not knowing?

  2. Repetition and skill: Murtaugh argues that repetition is how we develop craft. When you write the same pixel-manipulation loop three times, four times, ten times - does it become easier? Does it get into your fingers? Is the repetition waste, or is it learning?

  3. Context and clarity: Højberg argues that code should be written for the next programmer. When you write a function, are you giving enough clues? Or are you creating a swamp? What makes code “high-context”?

  4. Naming and power: When you name a function normalise() or clean() or correct(), you’re making claims about what’s normal, clean, correct. Whose standards are you encoding? What alternatives might exist?

  5. Images and bodies: If you’re manipulating images of people - faces, bodies - what are the ethics of algorithmic transformation? When facial recognition “normalises” faces for analysis, when beauty filters “enhance” features, when glitches corrupt representations - what’s at stake? Who has the right to transform images of others?

  6. Efficiency and extraction: Functions promise efficiency - write once, use many times. But efficiency for whom? At what cost? When does optimisation become extraction? What would it mean to write “inefficient” code that prioritises understanding over speed?

To deepen your thinking about functions, abstraction, images, and glitch:

Artists working with glitch and image corruption:

Artists working with images, race, and technology:

  • American Artist - Black Gooey Universe (2018), interface critique, renaming as resistance
  • Sondra Perry - IT’S IN THE GAME (2017), Blackness and digital rendering
  • Zach Blas - Facial Weaponization Suite (2012-14), anti-surveillance masks
  • Simone Browne - Dark Matters (2015) - surveillance and Blackness
  • Joy Buolamwini - Algorithmic Justice League, bias in facial recognition

Artists working with algorithmic image making:

  • Anna Ridler - Training sets, datasets as artistic material, Mosaic Virus (2018)
  • Trevor Paglen - ImageNet Roulette (2019), machine vision critique
  • Hito Steyerl - How Not to Be Seen (2013), poor images, digital circulation
  • James Bridle - Algorithmic culture, Autonomous Trap 001 (2017)

Key texts:



Create a sketch that manipulates images at the pixel level.

You might explore or experiment with:

  • What happens when you treat images as raw data rather than pictures?
  • How can corruption or error reveal underlying structures?
  • What does it mean to algorithmically process images that depict people, bodies, faces?
  • How do compression, file formats, and digital storage shape what images can be?
  • What’s the difference between using p5’s functions (tint, filter) vs operating on pixels directly?
  • How can procedural generation create images without photographic sources?

Technical requirements:

  • Work with images (loaded from files OR procedurally generated using pixel manipulation)
  • Access and manipulate pixel data directly using loadPixels(), pixels[], updatePixels()
  • Write at least one custom function that performs image operations
  • Use parameters to make your functions flexible
  • Demonstrate understanding of scope (global vs local variables)
  • Use loops to iterate over pixel data
  • Use conditionals to create different effects based on thresholds or mouse position

You might:

  • Create glitch effects (channel shifting, random corruption, threshold effects)
  • Process images mathematically (inverting, posterising, distorting)
  • Generate procedural images from scratch using pixel manipulation and formulas
  • Make interactive image manipulation controlled by mouse/keyboard
  • Create time-based pixel effects that evolve in draw()

You might also:

  • Write functions with names that reveal usually-hidden assumptions
  • Create functions that document their own operation (functions that explain what they’re hiding while they work)
  • Build redundant functions that repeat rather than abstract (what does meaningful repetition look like in code?)
  • Make functions that refuse to work “efficiently” - that are deliberately slow, verbose, or excessive as a way of revealing computational cost
  • Create a “processing pipeline” where each function performs one small transformation, making the steps explicit
What to submit:

  1. Your sketch (link to the p5.js web editor as usual, well-commented, explaining your functions and what they do)

  2. A reflection. Your reflection should include your approach to glitching - what glitching means to you, how you see glitching as a form of resistance or experimentation - while also considering what functions mean to you. Are they abstraction, or obfuscation? A way to hide, or to reveal? To control, or to resist? To understand, or to question?


Next week, we’ll explore events and interaction with sound using p5.sound. We’ll think about response, agency, listening, and being listened to. We’ll ask: who has agency in interactive systems? What does it mean for code to respond? How does sound make temporal interaction perceptible in new ways? What’s the difference between listening and being listened to?

But before we get there, really sit with functions. Feel how they shape your thinking. Notice what they make easy and what they make hard. Question every abstraction. Build your own. Write code for people, not just for computers. Think about how you might read your code aloud to others - how to perform code.

Functions are one of the most fundamental structures in programming - and in systems of power. Understanding them deeply means understanding something about legibility, access, control, and resistance. These aren’t abstract concepts. They’re how the world is organised, who gets to see what, who gets to do what, who gets to understand and who is kept in the dark.