Deep dive into package.json scripts

How package.json scripts work & everything you need to know to get started

If you've ever worked on a project that includes any sort of JavaScript, chances are there is a package.json in it. This file can be intimidating when you're starting out, and you may be scared to touch it for fear of breaking something.

This article is the first in a series of deep dives into package.json fields: how they work, what they do, and how you can use them to your advantage to make your project even better. By the end of the series you should feel comfortable playing around with package.json fields, be able to create a CLI, and more!

This article covers the "scripts" section and by the end, you will understand how to write your very own scripts in JS & TS (both CJS & ESM).

So without further ado let's dive right into the scripts part of our good old friend package.json!

How does npm work?

Before we get into the package.json part let's go over how npm works and executes scripts for you.

When you install Node on your PC it installs npm (the Node package manager) and npx for you too. It adds them to your PATH so your PC can execute npm as an executable. npm comes with a bunch of built-in commands such as:

  • run - allows you to run scripts

  • pack - packs a tarball of your project (useful for OSS)

  • publish - publishes your package to the npm registry (useful for OSS)

  • ci - performs a clean install: it removes node_modules and installs exactly what your lockfile specifies (useful in CI pipelines)

  • install - installs your package.json dependencies and devDependencies into the node_modules directory

  • uninstall - removes a dependency/devDependency from your project

And a bunch more. If you're interested in the whole list and want to go deeper, you can find the documentation at the following link:

https://docs.npmjs.com/cli/v10/commands

After this, you can use it to execute scripts via npm run <script-name> where the script-name matches a script in your package.json file.

You can also add global executables by installing a package as a global dependency. For example, if you want the executable that injects .env variables to be available globally, you can run:

$ npm install dotenv -g

This will add it to your global npm dependencies, so executing dotenv anywhere on your PC will work. The same goes for a script in your package.json like so:

{
  "scripts": {
    "env": "dotenv -e .env && npm run something"
  }
}

It will work because you have it installed globally. In many cases, though, this is not needed: if you don't want to install it globally, you can add the dependency to your package.json under devDependencies instead. npm is smart enough to first look for the executable in your node_modules and only fall back to the global install if it isn't there. So it works by having this in your package.json:

{
  "scripts": {
    // this works fine now even if not globally available!
    "env": "dotenv -e .env && npm run something"
  },
  "devDependencies": {
    // After running npm install this executable is in your node_modules
    "dotenv": "*"
  }
}
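To make that lookup order concrete, here is a minimal, purely illustrative sketch of what npm effectively does before running a script (the real implementation lives inside npm itself, and the paths here are made up):

```javascript
// Illustrative only: npm prepends <project>/node_modules/.bin to PATH
// before running a script, so local executables shadow global ones.
import path from "node:path";

function lookupDirs(projectRoot, systemPath) {
  const localBin = path.join(projectRoot, "node_modules", ".bin");
  // local bin is searched first, then the regular PATH entries
  return [localBin, ...systemPath.split(path.delimiter)];
}

const dirs = lookupDirs(
  "/my/project",
  ["/usr/local/bin", "/usr/bin"].join(path.delimiter)
);
console.log(dirs);
```

This is why the dotenv executable from devDependencies wins even if you also have an older copy installed globally.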

And lastly, another trick is to use npx. It will download the executable (if needed) before running the command, like so:

{
  "scripts": {
    // this works because npx will install the dependency and run it even
    // if you do not have it on your PC globally or locally in the dev deps
    "env": "npx dotenv -e .env && npm run something"
  }, 
}

But this shouldn't be done in most cases because you can run into version mismatches and other issues! Just because it worked a month ago doesn't guarantee it will work today if the versions change.

If you're wondering what the && is for, it allows us to run multiple scripts one after another. If you wish to add a script that runs multiple scripts in parallel, there is a great package you can look at for this use-case:

https://www.npmjs.com/package/npm-run-all
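As a sketch of what that could look like (the lint and typecheck script names are made up for illustration; this assumes eslint, typescript and npm-run-all are installed as devDependencies):

```json
{
  "scripts": {
    "lint": "eslint .",
    "typecheck": "tsc --noEmit",
    "check:sequential": "npm run lint && npm run typecheck",
    "check:parallel": "npm-run-all --parallel lint typecheck"
  }
}
```

check:sequential runs the two one after another and stops on the first failure, while check:parallel runs both at once.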

One last thing before we transition to scripts is understanding how flags and parameters work in npm.

If you have seen something like npm run test -- --watch you probably wondered what is going on. In short, you can pass parameters to any script, and there are two main ways of doing this.

The first one is to add the special -- delimiter, which tells npm to pass everything that comes after it to the script being run. So, for our case above, we run:

$ npm run test -- --watch

Assuming our test script is for example:

"test": "vitest"

This will result in the --watch flag being passed down to the script and the final script that is going to be run is:

vitest --watch

Another way to pass parameters is to pass them WITHOUT a leading - or --, for example:

$ npm run test scripts/run-me.js

this will result in the command:

vitest scripts/run-me.js

So the two main ways to pass parameters are either adding -- before them, which explicitly tells npm to pass them along, or excluding the - / -- prefixes entirely.

Here is a list of commands and what they result in for your better understanding:

$ npm run grunt -- task:target  // invokes `grunt task:target`
$ npm run server -- --port=1337 // invokes `node server.js --port=1337`
$ npm run test foobar
['C:\\Program Files\\nodejs\\node.exe', 'C:\\git\\myrepo\\test.js', 'foobar']

$ npm run test -foobar
['C:\\Program Files\\nodejs\\node.exe', 'C:\\git\\myrepo\\test.js']

$ npm run test --foobar
['C:\\Program Files\\nodejs\\node.exe', 'C:\\git\\myrepo\\test.js']

$ npm run test -- foobar
['C:\\Program Files\\nodejs\\node.exe', 'C:\\git\\myrepo\\test.js', 'foobar']

$ npm run test -- -foobar
['C:\\Program Files\\nodejs\\node.exe', 'C:\\git\\myrepo\\test.js', '-foobar']

$ npm run test -- --foobar
['C:\\Program Files\\nodejs\\node.exe', 'C:\\git\\myrepo\\test.js', '--foobar']
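On the receiving end, everything that survives npm's argument handling lands in process.argv. Here is a small sketch of how a script of your own could read a forwarded flag (the function and the hard-coded argv are made up for illustration):

```javascript
// process.argv layout: [node binary, script path, ...forwarded args],
// so arguments forwarded by npm start at index 2.
function parseForwardedArgs(argv) {
  const args = argv.slice(2);
  return {
    watch: args.includes("--watch"),
    rest: args.filter((arg) => arg !== "--watch"),
  };
}

// Simulated argv as npm would hand it to `npm run test -- --watch foobar`
const parsed = parseForwardedArgs([
  "/usr/bin/node",
  "/repo/test.js",
  "--watch",
  "foobar",
]);
console.log(parsed); // { watch: true, rest: ["foobar"] }
```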

If you want to understand this even more, you can look at this Stack Overflow question that explains it in detail and links further reading: https://stackoverflow.com/questions/11580961/sending-command-line-arguments-to-npm-script

Types of scripts

So before we go too deep into what scripts can do and how they work, let's first understand the types of scripts there are! I will classify all scripts into two types: external and internal scripts.

External scripts

Under external scripts I group everything that is executed by a third-party package rather than by your hand-made scripts. These will probably make up the majority of your package.json scripts, because a lot of scripts rely on third-party packages like Jest, Prisma, Vitest etc.

There are also special kinds of scripts run with npx that fall into this category. If you're not aware, npx allows you to execute any script (not just ones from npm) without having to install it first, so with npx you can:

  • Run GitHub gists/scripts

  • Run npm packages without installing dependencies (eg. running npx prisma commands in your build pipeline without having it in your dependencies)

  • Run local scripts that are located in your $PATH variable

Another cool thing about npx is that it installs the dependencies of the script into a temporary cache and executes it from there, instead of keeping them around in your project.

It also comes bundled with npm and is the de facto standard for running external scripts that you do not want to have locally installed.

These scripts are external and are usually bundled as executables on npm, but as I mentioned above, they are not limited to npm because you can run scripts from anywhere using npx.

Internal scripts

Now these are the in-house scripts that you create for yourself to make your life easier and boost your productivity. These can include anything from generating code snippets, injecting code, checking the validity of some entities, generating icons, optimizing images and the list goes on!

These scripts are usually located in the project directory itself and are bound to the current project because they are context-specific. E.g. if you're generating icons from an input file located in /public into somewhere like /app/icons, the script might not be useful in another project that doesn't have the /public or /app/icons directories.

Towards the end of the article we will be building our own setup script!

Script keywords

Now that we went over the two kinds of scripts let's go over some special script keywords you need to be aware of!

Pre scripts

These are special scripts whose names start with the pre prefix in the package.json scripts, e.g.:

{
  "predo-something": "do something before the `do-something` script",
  "do-something": "your script"
}

These scripts are executed BEFORE the script whose name follows the pre prefix. You can add one for any script in your package.json and it will run first.

If you have a db:seed:prod script and want to do a check before you seed you can add a predb:seed:prod script to execute something before the actual script executes.

The important thing to note is that you can run anything before another script just by specifying pre<script-name>.
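For the db:seed:prod example, that could look something like this (scripts/confirm-prod.js is a hypothetical check script, and the seed command assumes you use Prisma):

```json
{
  "scripts": {
    "predb:seed:prod": "node scripts/confirm-prod.js",
    "db:seed:prod": "prisma db seed"
  }
}
```

Running npm run db:seed:prod will now run the check first, and npm aborts the seed if the pre script exits with a non-zero code.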

Post scripts

The idea is pretty much the same as with pre scripts, but these execute AFTER the script has run. They are usually used for cleanup or validation, e.g.:

{
  "postdo-something": "do something after the `do-something` script",
  "do-something": "your script"
}

As you can see the approach is the same as with pre scripts, the only difference being that you use the post<script-name> prefix instead.

Lifecycle scripts

As I mentioned above, npm comes with a lot of built-in commands. There are special lifecycle scripts that are executed under certain conditions when you run one of these npm commands, such as:

  • prepare - Runs BEFORE the package is packed, i.e. during npm publish and npm pack

  • prepublish - Does not run during npm publish, but does run during npm ci and npm install. This command is deprecated in favor of prepublishOnly, which you should use instead!

  • prepublishOnly - Runs BEFORE the package is prepared and packed, ONLY on npm publish.

  • prepack - Runs BEFORE a tarball is packed

  • postpack - Runs AFTER the tarball has been generated but before it is moved to its final destination

  • dependencies - Runs AFTER any operations that modify the node_modules directory IF changes occurred.

Detailed info on all of these can be found here: https://docs.npmjs.com/cli/v10/using-npm/scripts#life-cycle-scripts

These scripts are not that useful if you're not building an OSS project, as they are mostly used in those scenarios, but they can have their use cases in normal projects as well!

Writing custom scripts

Alright, now that we have a firm grasp of how scripts work, let us explore how we can write our own to do some cool stuff. I will write both a JS and a TS version in this guide so you can fully understand the differences and what you need to do in each scenario.

In this section, I will write a script that does the following things:

  • Takes an environment argument and loads the correct .env file, so you can run scripts against whichever environment you wish.

  • Adds a confirm prompt, so you have to confirm you want to run a script before it executes.

What this script basically does is set up the .env for another script before it is executed! Let's start with the common setup first.

Setup

The first thing we want to do is create a project with a package.json file. Create an empty directory on your PC, then inside it run npm init and go through the setup; whatever you select doesn't really matter.

After we have this, we create a /scripts directory in the root of the newly created application, so if you called it something like test-app the path should be test-app/scripts.

Now, depending on whether you're going to follow along in JS or TS, create either a setup.js or a setup.ts file inside the /scripts directory. Also create a test.js or test.ts in /scripts as well.

For testing these commands, let's add .env and .env.prod files:

# Add this to the .env file:
DATABASE_URL="development"

# Add this to the .env.prod file:
DATABASE_URL="production"

After that I want you to run the following inside your app:

npm install -D dotenv prompt chalk@4.0.0

This will install the devDependencies we need; the -D flag is a shortcut for --save-dev. Chalk became ESM-only from version 5, so if you're using CJS you need to install a version before 5, which is why we installed version 4 here.

Chalk is optional: if you don't want to make your output prettier you don't have to add it. It just adds colors to the console output.

Dotenv is a popular package that we will use to inject the .env into our process and prompt is a simple library used to prompt the user with a question in the terminal.

Now you can paste the following script into your setup.js (or setup.ts) file and then we can go over what exactly it does:

import { spawn } from "child_process";
import prompt from "prompt";
import dotenv from "dotenv";
import chalk from "chalk";
// add all the env you wish here
const ENVIRONMENTS = ["stage", "prod", "test"];

const getEnvInfo = () => {
  // Gets the environment from the command line arguments if set, otherwise defaults to dev
  const env = process.argv.find((arg) => ENVIRONMENTS.includes(arg)) ?? "";
  // Sets the environment name to be console logged for info
  const envName = env !== "" ? env : "dev";
  // Allows for reading from .env .env.prod .env.stage etc
  const path = `.env${env ? `.${env}` : ""}`;
  return { env, envName, path };
};

// Helper method used to confirm the run
const confirmRun = async () => {
  const { envName } = getEnvInfo();
  console.log(
    `About to execute the command in ${chalk.bold.red(envName)} environment.`
  );

  // prompt needs to be started before we can read input
  prompt.start();
  const { sure } = await prompt.get([
    {
      name: "sure",
      description: "Are you sure? (y/n)",
      type: "string",
      required: true,
    },
  ]);

  if (sure !== "y") {
    console.log(chalk.bold.red("Command aborted!\n"));
    process.exit(1);
  }
};

const setupEnv = () => {
  const { envName, path } = getEnvInfo();
  console.log(chalk.green(`Loading environment: ${envName}`));
  dotenv.config({ path });
  console.log(
    `Environment loaded: ${chalk.green(envName)} from ${chalk.green(path)}`
  );
};

if (!process.argv[2]) {
  console.log(chalk.red("Missing command to run argument"));
  process.exit(1);
}
// Injects .env variables into the process
setupEnv();

// Main command to run
const main = () => {
  // Allows us to run scripts from the scripts folder without having to wrap them in package.json with npm run execute
  const command = process.argv[2].startsWith("scripts/")
    ? `npm run execute ${process.argv[2]}`
    : process.argv[2];
  // Filter out the script command and the environment (the slice(3) part) and remove our custom args and pass everything else down
  const filteredArgs = process.argv
    .slice(3)
    .filter((arg) => !ENVIRONMENTS.includes(arg) && arg !== "confirm");
  // Spawns a child process with the command to run
  // param 1 - command to run
  // param 2 - arguments to pass to the command
  // param 3 - options for the child process
  const child = spawn(command, filteredArgs, {
    cwd: process.cwd(),
    stdio: "inherit",
    shell: true,
  });
  // If the child process exits, exit the parent process too if the exit code is not 0
  child.on("exit", (exitCode) => {
    if (exitCode !== 0) {
      process.exit(exitCode ?? 1);
    }
  });
  // Listens to kill signals on the process
  ["SIGINT", "SIGTERM"].forEach((signal) => {
    process.on(signal, () => {
      // Kills the child if it hasn't already exited
      if (!child.killed) {
        child.kill();
      }
      process.exit(1);
    });
  });
};
// Makes the user confirm the run if the confirm argument is passed
if (process.argv.includes("confirm")) {
  confirmRun()
    .then(() => {
      main();
    })
    .catch(() => process.exit(1));

  // If the confirm argument is not passed, just run the command
} else {
  main();
}

Let's go one step at a time and explain each part of the script. The first thing to keep in mind is that execution goes from top to bottom and runs all the top-level code in the file. So if you want to execute code, you either keep it at the top level of your file or, if you want to execute something conditionally, you wrap it in a function and call that function.

The first part of the code is simple enough: we define a constant that holds all our possible environments, which you can extend to your liking:

// add all the env you wish here
const ENVIRONMENTS = ["stage", "prod", "test"];

After that, we have our get environment utility that we use in our other functions to get some info we need:

const getEnvInfo = () => {
  // Gets the environment from the command line arguments if set, otherwise defaults to dev
  const env = process.argv.find((arg) => ENVIRONMENTS.includes(arg)) ?? "";
  // Sets the environment name to be console logged for info
  const envName = env !== "" ? env : "dev";
  // Allows for reading from .env .env.prod .env.stage etc
  const path = `.env${env ? `.${env}` : ""}`;
  return { env, envName, path };
};

We first extract the env argument by trying to find it in process.argv. This array holds all the arguments passed to the script; the first two places are reserved, but it is essentially an array of the arguments passed in.

If you want to learn more about node process and how it works and the APIs it exposes you can find the documentation here:

https://nodejs.org/api/process.html

If we find it, we create the env name so we don't log undefined or an empty string, and then we build the path to the env file: e.g. if the argument is "stage", the path becomes .env.stage. You can change this part to match whatever your configuration is.
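To see the mapping in isolation, here is the same derivation as getEnvInfo, but taking argv as a parameter so we can feed it hand-made values:

```javascript
const ENVIRONMENTS = ["stage", "prod", "test"];

// Same logic as getEnvInfo, parameterized for easy experimentation
function envInfoFrom(argv) {
  const env = argv.find((arg) => ENVIRONMENTS.includes(arg)) ?? "";
  const envName = env !== "" ? env : "dev";
  const path = `.env${env ? `.${env}` : ""}`;
  return { env, envName, path };
}

console.log(envInfoFrom(["node", "setup.js", "vitest", "stage"])); // path: ".env.stage"
console.log(envInfoFrom(["node", "setup.js", "vitest"])); // path: ".env", envName: "dev"
```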

After that we have the confirmRun function, used conditionally at the bottom of the script. It asks the user to confirm they really want to run the script, and either exits the script or continues the execution.

const confirmRun = async () => {
  const { envName } = getEnvInfo();
  // prompts the user where the script will be executed
  console.log(
    `About to execute the command in ${chalk.bold.red(envName)} environment.`
  );
  // we prompt the user to confirm the action in the terminal 
  // prompt needs to be started before we can read input
  prompt.start();
  const { sure } = await prompt.get([
    {
      name: "sure",
      description: "Are you sure? (y/n)",
      type: "string",
      required: true,
    },
  ]);
  // if they don't confirm we early exit the process
  if (sure !== "y") {
    console.log(chalk.bold.red("Command aborted!\n"));
    process.exit(1);
  }
};

Finally, for the last of our helper methods, we have the setupEnv utility that runs right after its declaration to inject the .env file contents into the process; this is done using dotenv.

const setupEnv = () => {
  const { envName, path } = getEnvInfo();
  // Tells you which env it's loading
  console.log(chalk.green(`Loading environment: ${envName}`));
  // Injects the .env depending on the path
  dotenv.config({ path });
  // Tells you from where and which env was loaded
  console.log(
    `Environment loaded: ${chalk.green(envName)} from ${chalk.green(path)}`
  );
};
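If you are curious what dotenv.config roughly does under the hood, here is a deliberately naive sketch (the real package handles quoting, multiline values, overrides and much more; the file name and variable here are made up for the demo):

```javascript
import fs from "node:fs";
import os from "node:os";
import path from "node:path";

// Deliberately naive .env parser: KEY="value" or KEY=value, one per line.
// Like dotenv, it does not overwrite variables that are already set.
function loadEnvFile(filePath) {
  for (const line of fs.readFileSync(filePath, "utf8").split("\n")) {
    const match = line.match(/^([\w.]+)\s*=\s*"?([^"]*)"?\s*$/);
    if (match) process.env[match[1]] ??= match[2];
  }
}

// Demo: write a throwaway .env file and load it
const tmp = path.join(os.tmpdir(), ".env.demo");
fs.writeFileSync(tmp, 'DEMO_DATABASE_URL="development"\n');
loadEnvFile(tmp);
console.log(process.env.DEMO_DATABASE_URL); // development
```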

The next few lines of code are interesting; I am mainly talking about this piece:

if (!process.argv[2]) {
  console.log(chalk.red("Missing command to run argument"));
  process.exit(1);
}
// Injects .env variables into the process
setupEnv();

As I already mentioned, npm puts all your parameters into argv, and here we check whether the user has provided any. If not, we exit early and tell them to do so: running the script without specifying what to execute will fail immediately.
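The guard is easy to play with in isolation if you factor it into a function; this is a refactoring sketch for illustration, not literally what the script above does:

```javascript
// Guard pulled into a function so it can be exercised with
// hand-made argv arrays instead of exiting the process.
function requireCommand(argv) {
  if (!argv[2]) {
    return { ok: false, message: "Missing command to run argument" };
  }
  return { ok: true, command: argv[2] };
}

console.log(requireCommand(["node", "setup.js"])); // { ok: false, ... }
console.log(requireCommand(["node", "setup.js", "vitest"])); // { ok: true, command: "vitest" }
```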

Right after that we call setupEnv() to inject the .env file for our script. This is where the top-to-bottom execution I mentioned above comes in: as soon as Node reaches this line it calls the setupEnv function and executes its contents.

Finally, we reach the meat of our execution, the main function. Let's go piece by piece and explain what is going on.

 // Allows us to run scripts from the scripts folder without having to wrap them in package.json with npm run execute
  const command = process.argv[2].startsWith("scripts/")
    ? `npm run execute ${process.argv[2]}`
    : process.argv[2];

So, as I said earlier, process.argv fills the first two positions with information already, and we have a guard in place if the 3rd argument is missing. Here we check whether the argument starts with scripts/ and, if it does, we set the command to npm run execute <command>; otherwise it is just the command passed in. This allows us to do the following:

$ npm run script scripts/test.js 
// without the above trick this would be:
$ npm run script "npm run execute scripts/test.js"
// if you want to pass in a command with spaces you need to escape it with
// a string, otherwise it would read it as multiple arguments

But this is also flexible and allows us to inject .env files into our third-party scripts. For example, if we wanted to inject .env.stage into vitest we could run:

$ npm run script vitest stage

Or maybe we want to use different Prisma databases to see our DB data; then we could run:

$ npm run script "npx prisma studio" test // starts studio with test db
$ npm run script "npx prisma studio" prod // starts studio with prod db

Now that we have that covered, we want to strip away the arguments that were meant for our setup script itself. We do this as follows:

// Filter out the script command and the environment (the slice(3) part) and remove our custom args and pass everything else down
  const filteredArgs = process.argv
    .slice(3)
    .filter((arg) => !ENVIRONMENTS.includes(arg) && arg !== "confirm");

Because the first two spots are filled by node, and the third is our command, we remove those with slice(3). We also remove the "confirm" and "<env>" arguments if we passed them in; everything else that is left gets passed down to the underlying script.
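Running the same filtering over a hand-made argv makes it easy to see what survives:

```javascript
const ENVIRONMENTS = ["stage", "prod", "test"];

// Hand-made argv as node would see it for:
// `node scripts/setup.js vitest stage confirm --watch`
const argv = ["node", "setup.js", "vitest", "stage", "confirm", "--watch"];

// Same filtering as in the script above
const filteredArgs = argv
  .slice(3)
  .filter((arg) => !ENVIRONMENTS.includes(arg) && arg !== "confirm");

console.log(filteredArgs); // ["--watch"]
```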

For the following code snippet I won't go into too much detail on how the underlying API works, because it's out of the scope of this exercise. In short, it spawns a child process from our current process that inherits its console output, sets its working directory to the current working directory (the project root), and passes in the command and arguments we prepared above. It then adds listeners for exit events on both the process and the child process, to make sure everything closes gracefully if you terminate the process or it fails:

// Spawns a child process with the command to run
  // param 1 - command to run
  // param 2 - arguments to pass to the command
  // param 3 - options for the child process
  const child = spawn(command, filteredArgs, {
    cwd: process.cwd(),
    stdio: "inherit",
    shell: true,
  });
  // If the child process exits, exit the parent process too if the exit code is not 0
  child.on("exit", (exitCode) => {
    if (exitCode !== 0) {
      process.exit(exitCode ?? 1);
    }
  });
  // Listens to kill event on the process
  ["SIGINT", "SIGTERM"].forEach((signal) => {
    process.on(signal, () => {
      // Kills the child if it hasn't already exited
      if (!child.killed) {
        child.kill();
      }
      // exits the process
      process.exit(1);
    });
  });
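If you want to poke at the child-process API without the whole setup, the synchronous sibling spawnSync shows the same idea in a couple of lines:

```javascript
import { spawnSync } from "node:child_process";

// Run node itself as a child process and capture its output.
// spawnSync blocks until the child exits, which keeps the example short.
const result = spawnSync(
  process.execPath,
  ["-e", "console.log('hello from child')"],
  { encoding: "utf8" }
);

console.log(result.stdout.trim()); // hello from child
console.log(result.status); // 0
```

The article's script uses the asynchronous spawn with stdio: "inherit" instead, so the child writes straight to your terminal rather than into a captured buffer.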

Last, but not least, we run our main command, with a catch! If we add the confirm argument to our parameters it will first prompt us to confirm that we want to continue (useful if you don't want to accidentally run a script that kills your production DB). If it's not present, the command executes normally without the prompt:

// Makes the user confirm the run if the confirm argument is passed
if (process.argv.includes("confirm")) {
  confirmRun()
    // runs the process after you confirm it
    .then(() => {
      main();
    })
    // if the function throws we exit as well
    .catch(() => process.exit(1));

  // If the confirm argument is not passed, just run the command
} else {
  main();
}

Alright, we covered everything! One last thing before we move on to the JS/TS implementations, add this to your package.json:

{
  // Allows us to use import x from "x" inside our scripts
  // If you're adding this to an existing project and it complains that
  // you're missing this in package.json you can just require
  // all the imports inside the scripts we write
  "type": "module", 
}

Alright, let's get to work! First the easy one, Javascript!

Javascript

Assuming you've done the setup above, paste the following scripts into your package.json:

{
  "scripts": {
    // because our scripts are in JS we use node to execute them
    "execute": "node",
    "script": "npm run execute scripts/setup.js",
    "env": "npm run script scripts/test.js",
    "confirm": "npm run script scripts/test.js confirm"
  }
}

And add this to your test.js:

console.log("DATABASE URL: ", process.env.DATABASE_URL);

So what are we doing here?

Well, first we define an execute script that will be what runs our scripts. In our case it's node, but as we will see below in the TS example, it can be something different.

Then we add the script utility that runs our setup.js inside the /scripts directory we created earlier, injects our environment variables and asks us to confirm if needed.

After these two are added we can start writing new scripts that actually do something on top of this. The scripts I wrote just demonstrate what happens, so if I run:

npm run env

You will get output like the following:

Loading environment: dev
Environment loaded: dev from .env
DATABASE URL:  development

And if you run:

npm run env prod

You will get the production values instead:

Loading environment: prod
Environment loaded: prod from .env.prod
DATABASE URL:  production

Awesome! We are injecting our env into our scripts! But what if we want to confirm the action before the script runs? Well, that's what the confirm parameter is for!

If you run the confirm script like so:

npm run confirm

You get a prompt telling you which environment you're in and asking: Are you sure? (y/n)

If you answer anything but y, the script prints Command aborted! and exits. If you answer y, it continues and prints the DATABASE URL just like before.

Awesome! There we have it, a completely working setup script! Now let's see how we get this running with TypeScript.

TypeScript

So for TypeScript there are a few caveats you need to understand before we jump into the implementation. TypeScript can't run your code until it's compiled into JavaScript, which means it first needs to go through a compilation step to produce JS that can be executed. There are two flavors of TS output, CJS and ESM. This article doesn't go into depth on these two topics, but you can find an amazing article about them here (written by the one and only Anthony Fu):

https://antfu.me/posts/publish-esm-and-cjs

For the compilation we will be using ts-node; you can find its docs and detailed usage instructions on its GitHub or npm page. I will cover both the CJS and ESM variants below. So without further ado, let's jump into the TS implementation!

The flow is identical to the JavaScript one, but there are a few extra steps you need to follow to get it working, all of them listed below.

Firstly install TS:

$ npm i typescript @types/node @types/prompt -D && npx tsc --init

This will install TS as a dev dependency and initialize a tsconfig.json file for you.

ESM

If you want to run the scripts with ESM you can add the following to your tsconfig.json:

{
  "compilerOptions": { 
    "module": "NodeNext" /* Specify what module code is generated. */,
    "moduleResolution": "NodeNext"
  }
}

And you will need to install the latest version of chalk (5.0.0 or higher).

Then you add the following to your package.json:

"scripts": {
  "execute": "npx ts-node --esm --transpile-only"
}

And you're good to go! Also, don't forget to replace every ".js" extension in your scripts with ".ts".

CJS

If you want to run it in a CJS environment you will need the following tsconfig.json

{
  "compilerOptions": {
    "module": "CommonJS" /* Specify what module code is generated. */,
    "moduleResolution": "Node"
  }
}

You will need to remove the following from package.json:

{
  ...
- "type": "module",
  ...
}

Install chalk version 4 (version 5 is ESM only), you can do so by running:

$ npm i -D chalk@4

And finally, add the following to your package.json:

"execute": "npx ts-node --transpile-only",

And we are good to go! We have our scripts working in both CJS & ESM flavors!

Thank you!

If you've made it here you are a real champ! Thank you for sticking through the whole article and going on a deep dive into package.json scripts with me. If you like my content and want to learn more, this is the first in a series of deep-dive articles on package.json that I plan to write, so be sure to follow me for more!

If you wish to support me follow me on Twitter here:

twitter.com/AlemTuzlak59192

or if you want to follow my work you can do so on GitHub:

github.com/AlemTuzlak

And you can also sign up for the newsletter to get notified whenever I publish something new!