Make Tips

I tend to use make as a task runner, a purpose it is not expressly suited to, but one it works well enough for.

This site’s own makefile.

Note I almost always use and build for GNU Make, which differs in some ways from other makes.


See the intro section of the GNU Make manual. And this page is a good intro and reference.

Basic format:

# v-- filename of the thing the recipe builds or an action/task name
target: prerequisites ...
#         ^-- other targets to build before this one
# v-- must start with a tab character, no spaces here
    recipe # <-- lines to execute

Each line executes in a separate shell, so you cannot set a shell variable on one line and use it on the next. Chain commands together with && or ;, e.g.,

    a_command && b_command # execute in same shell
    a_command; b_command # also executes in same shell
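To see why this matters, here's a minimal sketch (target names invented): a shell variable set on one recipe line is gone by the next line, but survives within a chained command. (GNU make also offers the .ONESHELL directive if you'd rather run a whole recipe in a single shell.)

```make
broken:
    FOO=bar               # this line's shell exits, taking FOO with it
    echo "FOO is $$FOO"   # a new shell: prints "FOO is "

works:
    FOO=bar; echo "FOO is $$FOO"   # same shell: prints "FOO is bar"
```

Note the $$: a literal $ must be doubled in a recipe so make passes it through to the shell.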

If you have a long line, use backslashes to break it up across lines.

    command thing $(SOME_VAR) \
    --a-flag \
    --another-flag

@ at the beginning of a line suppresses make echoing the command.

echo:
    echo "Hello, world"
$ make echo
echo "Hello, world"
"Hello, world"


echo:
    @echo "Hello, world"
$ make echo
"Hello, world"

- at the beginning of a line suppresses errors on that line from killing target build.

    -rm *.o

Finishes successfully even if the rm errors (like if there are no .o files to clean; for this specific case you’d probably just use the -f flag on rm itself, but this is just an example).

You can combine them:

    @-rm *.o

Doesn’t say what it’s doing or care if it errors.

If make is called without a target, it will run the first target listed in the makefile, unless the .DEFAULT_GOAL variable is set, in which case that target is run instead.
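A quick demonstration (the file path and target names are made up for this sketch): even though other is listed first, setting .DEFAULT_GOAL makes greet the target that runs when make is invoked bare.

```shell
# build a throwaway makefile; recipe lines must start with a TAB
TAB="$(printf '\t')"
{
  echo '.DEFAULT_GOAL := greet'
  echo 'other:'
  echo "${TAB}@echo other ran"
  echo 'greet:'
  echo "${TAB}@echo hello from greet"
} > /tmp/default-goal-demo.mk

make -f /tmp/default-goal-demo.mk   # prints "hello from greet"
```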

Capitalize makefile or not

It doesn’t really matter. Makefile is conventional.

Note that GNU make searches for makefile before Makefile, so if both are present, the lowercase one wins.

This can be an interesting feature in a shared project. For instance, if you have a version-tracked/shared Makefile and a local makefile that does an include Makefile to pull in the shared one, you can then add personal targets to your makefile: things that are only useful to you, or that you are giving a trial run for a while.

This is also potentially dangerous, mixing local modifications into the same interface. It’s safer to use a differently named file, like a make.local that you have to invoke with make -f make.local <target>, but sometimes it’s nice to have your tweaks presented in the same interface.

I prefer lowercase makefile as it’s ever so slightly easier to type and it’s usually surrounded by other lowercased names so it looks better to me to match.

Make as a task runner

Often in projects you have a number of commands that you may wish to run on occasion. There are many existing task runner solutions out there (the JavaScript world in particular seems to love to create them). I shy away from these as I think they are often overcomplicated for the task at hand.

If a project has a JavaScript dependency, often it will use the scripts section in its package.json file, which can easily be run with npm run <script name>. I dislike this for a few reasons:

  1. I avoid JavaScript dependencies as much as possible
  2. I generally want a package manager to be responsible for one thing, getting packages
  3. I prefer to have npm scripts disabled globally as they are a security hazard
  4. Can’t have comments (because JSON)

Similar reasons apply to pipenv scripts or whatever, especially point two.

If you already have a build system, e.g., rake, Ant/Maven/Gradle, SCons, etc., maybe it’s easier to stuff tasks in there, but even then I think a simple makefile might still have a place for high-level project management tasks.

Advantages of make:

  • Been around forever
  • Available everywhere and probably already installed
  • Just lets you write shell scripts
  • Independent of your project-specific language environment, a couple of advantages to this:
    • A more stable tool: language-specific tooling usually grows and changes a lot more than make will, so make can paper over those changes. It’s also divorced from your package management, so less churn, easier to track history, etc.
    • A common interface between different projects (which are all potentially written in a variety of different languages). You can build up common targets and practices around make that can be shared between all your projects, providing a consistent interface when bouncing between projects.

Disadvantages of make:

  • Old and thus not purposely designed for today’s environment
  • Its ways and appearances can be odd to the uninitiated, or to the initiated who have been away for a time

For me, a simple makefile strikes a good balance for many projects.

My suggested approach is this:

  1. At the beginning, use a simple makefile for common tasks. Whenever you find yourself needing to run some command (or sequence of commands) more than once, stick it in your makefile.
  2. For bigger tasks, write a standalone script, store it in a scripts/ directory and call the script in the makefile.
  3. When you get to the point where you have lots of scripts or some complicated behavior, write a custom application for managing your project. This might be just a CLI library wrapping some scripts, but it’s very nice having a management interface specifically tuned to your project (and having a proper programming language to build it in). Use a build system library like Shake if needed.
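For step two, the make target becomes a thin wrapper around the script (the target and script name here are hypothetical):

```make
deploy: ## Deploy the site
    ./scripts/deploy.sh
```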

Shell tab-complete

Many shells support autocomplete for make targets: zsh ships with support by default, and there is often a bash-completion package for your system which will enable it in bash.

This is pretty handy: you can just type make, then hit Tab to see a list of targets and have tab-completion for finishing target names.


Variables

There are two flavors of variables in make. In short, variables defined with = are recursively expanded: their value is evaluated every time they are referenced, which can have performance consequences if the variable is defined in terms of functions.

The other kind of variables are defined with := which are simply expanded variables that get evaluated only once, when make first encounters them. These behave much more understandably and should be preferred by default.

There’s also another form, ?= which sets a variable only if it hasn’t been defined, which can be useful to define defaults for variables you may want to inherit from the environment. More on that in passing arguments.
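A sketch of the three flavors side by side (the variable names are invented):

```make
NOW_RECURSIVE  = $(shell date)   # recursive: date re-runs on every $(NOW_RECURSIVE)
NOW_SIMPLE    := $(shell date)   # simple: date ran once, right here
PORT          ?= 8080            # default: only set if PORT isn't already defined
```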

Passing arguments to make targets

Okay, so there are basically two ways to get values to a target.

From environment

All environment variables (except MAKE and SHELL) become make variables inside the makefile. This is probably the most common way to modify make’s behavior. If you were to do:

THING=val OTHER_THING=val2 make thing

$(THING) and $(OTHER_THING) would be available in the makefile.

The important thing about inheriting environment variables is that they are only inherited if they are not set in the makefile (unless you pass -e or use the override directive on the variable). For instance, if our makefile was:

msg = hello

echo:
    @echo $(msg)

And then try:

$ msg=world make echo

The output is hello, not world.

This is where the ?= variable definition comes in.

If we had instead written our makefile like:

msg ?= hello

echo:
    @echo $(msg)


$ msg=world make echo
world

So in short, define variables with ?= if you want to enable the variables to be overridden by an environment variable (which is usually what you want).

As arguments to make

You can also set make variables by passing them as arguments to make. They can come before or after the target name.

make THING=val OTHER_THING=val2 thing
make thing THING=val OTHER_THING=val2

Unlike environment variables, it doesn’t matter how these variables are defined in the makefile: their value is what is set in the argument; any definitions for them in the makefile are ignored.

Arbitrary arguments

Okay so setting arguments is cool, but what if you want to pass arbitrary arguments to a target? Well, just use a general variable name, like args.

Write your target like:

ls:
    ls $(args)

Which you could then call like:

$ make ls args='-la ~'
# ls -la ~

You can even set default arguments on a per-target basis, say:

ls: args = ~/
ls:
    ls $(args)

run: args = --flag -t file
run:
    command $(args)

But if you really don’t want to have to type args="", you can bend and contort to support passing the other arguments to make directly to a target. Reproduced from this StackOverflow answer.

Write your makefile like:

# If the first argument is "run"...
ifeq (run,$(firstword $(MAKECMDGOALS)))
  # use the rest as arguments for "run"
  RUN_ARGS := $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
  # ...and turn them into do-nothing targets
  $(eval $(RUN_ARGS):;@:)
endif

prog: # ...
    # ...

.PHONY: run
run: prog
    @echo prog $(RUN_ARGS)

You could then do:

$ make run foo bar baz
prog foo bar baz

If you want to pass options to the target, you do need to separate them from make with a -- (like other commands), so something like:

make run -- --from here --to there

Generally, if you do need to pass arbitrary arguments to a target all the time, I would suggest writing a script and running it directly, maybe with make targets for the common cases. But you do you.

User defined functions

There are a variety of built-in functions, but it’s often useful to define your own reusable block of code.

You can define “functions” with the define directive, like so:

define run_thing
    do_thing_with_first_arg $(1)
    do_another_thing_with_other_arg $(2)
endef

The define directive is just a way to build up a multiline variable, so what we are really doing is creating a variable. Using a variable to store commands is also called a canned recipe.

Then to use it, pass it to the call function in the form of $(call <name_of_function>[,param][,param][,…]), like so:

$(call run_thing,foo,bar)

Which would execute:

do_thing_with_first_arg foo
do_another_thing_with_other_arg bar

More concretely, say you wanted to print a test coverage report after every run of the test suite, as for a Python project using pytest and coverage:

define run_tests
    poetry run coverage run --source src -m pytest $(1)
    poetry run coverage report
endef

test: ## Run all tests
    $(call run_tests,.)

# say this project's convention is to mark tests that are integration tests, so
# therefore, to run the unit tests, we want to run everything *not* marked as an
# integration test
test-unit: ## Run just unit tests
    $(call run_tests,-m "not integration")

Notably, the call function allows us to parameterize the definition (the $(1) and $(2) above), which is usually what I need, but if you just want to collect a few lines to run as-is in a few places, you can use the definition as a regular variable, e.g., $(run_thing).

Auto-documented makefiles

This post describes the approach in detail and is quite handy. In short, annotate your targets like:

deps: ## Install project dependencies

Add the magic incantation:

help: ## Displays this help screen
    @grep -Eh '^[[:print:]]+:.*?##' $(MAKEFILE_LIST) | \
    sort -d | \
    awk -F':.*?## ' '{printf "\033[36m%s\033[0m\t%s\n", $$1, $$2}' | \
    column -ts "$$(printf '\t')"

And then when you run make help you get a nicely formatted help page.

Quick line-by-line breakdown, first grep the makefile(s) for our special comments:

┌ tell make not to print the line, we are only interested in the output of the command
|                  ┌ the regex to match your comment pattern
|                  |                    ┌ automatic variable, holds all filenames that have been parsed for make rules on this invocation
|         ┌--------+----------┐ ┌-------+------┐
@grep -Eh '^[[:print:]]+:.*?##' $(MAKEFILE_LIST) | \
       |└ don't print filenames, for when multiple makefiles are parsed (say by including another)
       └ extended regex support

Then we sort the lines grep returns:

sort -d | \

Then add some color to the target name, which means: split the line into its parts, color the name, then recombine:

         ┌ split the input based on our comment pattern
         |         set color ┐          ┌ reset color
    ┌----+----┐           ┌--+---┐  ┌--+--┐
awk -F':.*?## ' '{printf "\033[36m%s\033[0m\t%s\n", $$1, $$2}' | \
                                  └┘       └┘└┘
                      target name ┘        |  └ comment/help text
                                           └ separator character (TAB in this case)

Then nicely align everything:

        ┌ table mode, so columns and rows fill correctly
        |┌ set separator character between columns
column -ts "$$(printf '\t')"
            |      └ get a literal tab character
            └ escape the $ for the tab character since we are in make

I use a tab character to separate the target and help text as it’s unlikely to be used in the help text; feel free to choose a different one, just keep it in sync between the awk and column.

You can of course go simpler, something like:

    grep -E "^[[:print:]]+: ##" [Mm]akefile

Which will show the targets in the order they are in the makefile, even highlighting the target name (and other comment pattern junk).

Or with straight awk:

@awk 'BEGIN {FS = ":.*?## "} /^[a-zA-Z0-9_-]+:.*?## / {gsub("\\\\n",sprintf("\n%22c",""), $$2);printf "\033[36m%-20s\033[0m %s\n", $$1, $$2}' $(MAKEFILE_LIST)

Or use a different pattern:

# target: help - Display callable targets.
help:
    @grep -E "^# target:" [Mm]akefile

# target: list - List source files
list:
    # Won't work. Each command is in separate shell
    cd src
    ls

    # Correct, continuation of the same shell
    cd src; \
    ls

# target: dist - Make a release.
dist:
    tar -cf $(RELEASE_DIR)/$(RELEASE_FILE).tar src && \
    gzip -9 $(RELEASE_DIR)/$(RELEASE_FILE).tar

Ultimately, you’re just grepping some files, do whatever works for you.

When order matters

The prerequisites of a target are not guaranteed to be run in any particular order. For instance:

thing: do-one do-two do-three

So the do-{one,two,three} targets will get run before thing does, but the order they run in is not deterministic: do-one is not guaranteed to run before do-two, and so on (this is ignoring what prerequisites the do- targets might have for the moment).

Now, they often do get run in the order you list them, and for simple, small things you can usually get by; you just have to be careful the targets are not part of any other targets you might want to run with -j.

But sometimes this matters, sometimes you really want to run a series of other make targets in a guaranteed order. For this you have to reach for recursive make.

WARNING: Recursive make can be a bit of a gotcha and should be avoided where possible. For instance, variables that are not marked for export are not automatically inherited by the sub-make. Read up before you start down the recursive path.

thing:
    $(MAKE) do-one
    $(MAKE) do-two
    $(MAKE) do-three

$(MAKE) is just a variable that is set to the same make command that is currently running.

Calling make inside of a make target is traditionally used to call other makefiles in sub-directories to build smaller components of a system and so make will print some extra information about what directory it’s running in when called recursively.

For task-runner purposes this can be a little annoying. GNU make has an easy flag to turn this off though, so I’d recommend setting a special variable with the flag and using it, like:

# (q)uiet (make), use whatever you want
QMAKE := $(MAKE) --no-print-directory

thing:
    $(QMAKE) do-one
    $(QMAKE) do-two
    $(QMAKE) do-three

If you want to quiet it for every recursive make call, then you could just add it to MAKEFLAGS like:

MAKEFLAGS += --no-print-directory

thing:
    $(MAKE) do-one
    $(MAKE) do-two
    $(MAKE) do-three

And achieve the same thing.

For non-GNU make you can set the -s flag instead, but that silences all output, which is usually less desirable.

If you can manage to structure your targets just so, then you can enforce some ordering with just prerequisites, like:

do-two: do-one
do-three: do-two

# technically would only need to say do-three
thing: do-one do-two do-three

But often you can’t structure your targets like this.

Also, there are things called order-only prerequisites, but don’t let that name fool you: they do not get run in a specified order. They are prerequisites whose timestamps are ignored, so they are brought up to date before the target, but changes to them never cause the target itself to be rebuilt (which sometimes matters).
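A common use for order-only prerequisites, sketched here with invented file names, is an output directory: it must exist before compiling into it, but its timestamp changing (as it does whenever a file inside is touched) should not force recompiles, so it goes to the right of the |:

```make
# obj/ must exist first, but touching it never triggers recompiles
obj/%.o: %.c | obj
    gcc -c $< -o $@

obj:
    mkdir -p obj
```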

Running other targets

Our old friend recursive make.

build:
    gcc -Wall $(args)

debug:
    $(MAKE) build args='-O0'

release:
    $(MAKE) build args='-O9'

Though sometimes it might be better to pull out the common bits into a variable, and not call make, like:

BUILD_CMD := gcc -Wall

build:
    $(BUILD_CMD) $(args)

debug:
    $(BUILD_CMD) -Og

release:
    $(BUILD_CMD) -O2

But sometimes the other targets you want to call have prerequisites or other setup that you don’t want to have to duplicate, so sometimes it is easier to use recursive make.


Changing the shell

By default, regardless of what your personal shell is, make recipes run under /bin/sh. You can change this by explicitly setting the SHELL variable, either globally in the makefile or for a specific target, like:

clean: SHELL:=bash
    rm $(SOME_DIR)/{one,two,three}/*.junk
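The global form is a one-liner near the top of the makefile:

```make
SHELL := bash   # every recipe in this makefile now runs under bash
```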

GNU vs BSD make

There are other makes out there, but generally I only think about GNU and BSD variants. As mentioned at the top, I almost always write for GNU make specifically.

If you want more portable makefiles, avoid using anything listed on the features and missing pages for GNU make.

I don’t recommend it, but you can also maintain separate GNUmakefile and BSDmakefile files and the different makes will pick the right one automatically (GNU make, for example, looks for GNUmakefile, makefile, and Makefile, in that order, unless told to read a specific file with -f/--file). For stuff that isn’t variant specific, you can put it in a makefile.common (or whatever), then include makefile.common in each of the others to ease the maintenance burden.

Actually building things

Make is, first and foremost, a build system, designed to take source files and do something with them to generate other files (like an executable). So if you actually do have file dependencies, make has some nice features.

Take a super simple example.

test.html: test.md
    pandoc test.md --from markdown --to html --output test.html

This is a rule that knows how to build the test.html file from its markdown source file. We can run make test.html to build it.

The nice thing is make only does the actual work when it needs to, by comparing the timestamps of the prerequisites and the target: if a prerequisite has been modified after the currently generated target’s timestamp, make knows it should rerun the recipe; otherwise it doesn’t. You can see this if you run make test.html again: make will say there is nothing to do.

But we are repeating ourselves a bit, make knows the target we intend to build and the source inputs before it goes to run the recipe and has special variables we can use for them.

test.html: test.md
    pandoc $< --from markdown --to html --output $@

$< is the expanded prerequisite name (test.md in this case). $@ is the expanded target name (test.html in this case).

Find more automatic variables in the docs.
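Beyond $@ and $<, the one I reach for most is $^, which expands to all the prerequisites; a sketch with invented file names:

```make
# link every listed object file into the target
app: main.o util.o
    gcc $^ -o $@   # expands to: gcc main.o util.o -o app
```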

Better. But we can generalize a little more, say if we have a bunch of markdown files, maybe documentation stuff, we can use a pattern rule:

%.html: %.md
    pandoc $< --from markdown --to html --output $@

This is a rule that knows how to build any html file from its corresponding markdown source file. So we can just say make test.html or make other.html and it will generate them if needed.

Okay, but we don’t want to have to run a bunch of make <file>.html commands in order to get all our html files generated. Let’s write a rule that has as prerequisites all the html files, so when we run that target it will trigger the build of all the html files.

html_files := test.html \
              other.html

all: $(html_files)

%.html: %.md
    pandoc $< --from markdown --to html --output $@

So we’ve added a list of the html files in the variable html_files and added it as prerequisites to the target all, so when we run make all, all the html files will be generated.

But it could be tedious to maintain the list of html files by hand, adding a new line every time we create a new markdown file. Let’s automate that.

md_files   := $(wildcard *.md)
html_files := $(patsubst %.md,%.html,$(md_files))

all: $(html_files)

%.html: %.md
    pandoc $< --from markdown --to html --output $@

So we’ve used the wildcard function to get a list of all the markdown files in the directory. We then swap their file extension with the patsubst function. Everything else is the same as before.

We can now just add markdown files as we wish, make all will build or rebuild them as necessary.

Another wonderful feature of make is its parallel building, through the -j/--jobs option. You sometimes need to write your rules with parallel building in mind to get the full benefit, but as we’ve written our rules here, we can run, say, make all -j 5 to build five files at once, a potentially big time saver if you have the compute capacity.

This could make our directory a little messy; let’s add a clean target to delete the generated files.

md_files   := $(wildcard *.md)
html_files := $(patsubst %.md,%.html,$(md_files))

all: $(html_files)

clean:
    -rm $(html_files)

%.html: %.md
    pandoc $< --from markdown --to html --output $@

Now we can run make clean to remove all those html files, if we want to force regenerating them from scratch, or we just don’t need them anymore.

This is just a simple example, but hopefully illustrative of how we can take advantage of some of make’s features to build a simple and powerful tool.


Includes

You can import other makefiles with the include <file> feature, which means it is possible to develop “libraries” of useful things you can copy around/submodule/fetch from wherever.
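The mechanics are just (the file name here is hypothetical):

```make
# targets and variables defined there become available here
include common-tasks.mk
```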

Some existing ones to use or borrow from:

Bigger example

Extracted from an old, but real, project.

.PHONY: build check-elm clean clean-elm deps distclean install

BUILDDIR ?= '_build'
DESTDIR ?= '../server/priv/static'

build: $(addprefix $(BUILDDIR)/, elm.js index.html)
build: ## Builds application into $(BUILDDIR)
    cp -r assets $(BUILDDIR)/
    cp -r node_modules/uswds/dist/* $(BUILDDIR)/assets/

build-prod: build
build-prod: ## Builds application in $(BUILDDIR), w/ optimizations
    mv $(BUILDDIR)/elm.js $(BUILDDIR)/elm-unoptimized.js
    closure-compiler --js $(BUILDDIR)/elm-unoptimized.js --js_output_file $(BUILDDIR)/elm.js --compilation_level SIMPLE
    rm $(BUILDDIR)/elm-unoptimized.js

install: ## Puts build files into $(DESTDIR)
    mkdir -p $(DESTDIR)
    cp -r $(BUILDDIR)/* $(DESTDIR)

deps: ## Installs project dependencies
    npm install
    elm package install --yes
    cd tests && elm package install --yes

clean: clean-elm
clean: ## Deletes build artifacts
    rm -r $(BUILDDIR)

clean-elm: ## Deletes Elm build artifacts
    rm -r elm-stuff/build-artifacts

distclean: clean
distclean: ## Deletes all non-source code stuff
    rm -r elm-stuff node_modules

check-elm: clean-elm $(BUILDDIR)/elm.js
check-elm: ## Rebuilds application for warnings


# $$(@D) in a prerequisite list relies on secondary expansion
.SECONDEXPANSION:

ELM_FILES := $(shell find src/ -type f -name '*.elm')
$(BUILDDIR)/elm.js: $(ELM_FILES) | $$(@D)/.
    elm make --warn src/Main.elm --output $@

$(BUILDDIR)/index.html: index.html | $$(@D)/.
    cp index.html $@

# Trick to make it easy to ensure a target's parent directories are created
# before we try to use them, just add a `| $$(@D)/.` to the dependencies of a
# target to ensure the directories in its path are created
%/.:
    mkdir -p $@

# Self-documenting makefile
help: ## Displays this help screen
    @grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-10s\033[0m %s\n", $$1, $$2}'
    @echo ""
    @echo "BUILDDIR: $(BUILDDIR)"
    @echo "DESTDIR: $(DESTDIR)"