Sunday, October 27, 2024

Recommended Icon Libraries

I use a variety of icons in my projects, and after a lot of churn, I’ve settled on a few free icon libraries that I like, both in how they look and in developer experience.

Here’s a quick rundown of the three I recommend, as well as how to use them in React:

Lucide

Lucide is my base icon library. The React library is easy to use (code sample below), and the icon set is well-designed and consistent. I first learned of it through shadcn/ui, and I’ve been using it since.

$ npm install lucide-react

Here’s how you use it in React:

import { ChevronDown } from "lucide-react";

export default function MyComponent() {
  return <ChevronDown />;
}

The library is tree-shakable, so you can import just the icons you need and save bundle space. They look great, and at the time of writing, there are 1,500+ icons to choose from.

Simple Icons

Simple Icons is a set of SVG icons for popular brands. They look somewhat similar to Lucide, so they work well in tandem with it.

The primary library has a weird import structure that I don’t like:

import { siSimpleicons } from "simple-icons";
console.log(siSimpleicons);
/*
{
    title: 'Simple Icons',
    slug: 'simpleicons',
    hex: '111111',
    source: 'https://simpleicons.org/',
    svg: '<svg role="img" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">...</svg>',
    path: 'M12 12v-1.5c-2.484 ...',
    guidelines: 'https://simpleicons.org/styleguide',
    license: {
        type: '...',
        url: 'https://example.com/'
    }
}
*/
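
If you do want to render straight from that object, here’s a minimal sketch of using the svg and hex properties - note that the icon literal below is a trimmed stand-in for the real simple-icons export, not the actual package:

```javascript
// Tint a Simple Icons entry with its own brand color by injecting a
// fill attribute into the raw SVG string. The `icon` object is a
// trimmed stand-in for what simple-icons exports - only the fields
// this helper needs.
function tintedSvg(icon) {
  return icon.svg.replace("<svg ", `<svg fill="#${icon.hex}" `);
}

const icon = {
  title: "Simple Icons",
  hex: "111111",
  svg: '<svg role="img" viewBox="0 0 24 24" xmlns="http://www.w3.org/2000/svg">...</svg>',
};

console.log(tintedSvg(icon));
// '<svg fill="#111111" role="img" ...'
```

You could then drop that string into your markup (e.g. via innerHTML).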

You can use the svg property there to render the icon yourself, but I prefer to use the @icons-pack package, which is a bit more convenient:

$ npm install @icons-pack/react-simple-icons

This package has an easier-to-understand import structure, similar to Lucide:

import { SiReact } from '@icons-pack/react-simple-icons';

function BasicExample() {
  return <SiReact color='#61DAFB' size={24} />;
}

Heroicons

Heroicons is an icon set designed by the creators of Tailwind CSS. It explicitly doesn’t ship with a React library, so it’s better for one-off use cases: to grab an icon, you go to the website and copy the SVG or JSX code. There aren’t a ton of icons available, but I like supporting the Tailwind team, as I use their framework heavily. This is a minimal option for devs who don’t want to import a whole library.

This icon set doesn’t seem to have many releases (there are just two listed on the site over a few years), so I’m not 100% sure about the maintenance status of the project. That said, I’ve been using it for a while, and it’s been great.

Tuesday, October 8, 2024

Understanding Astro's getStaticPaths function

getStaticPaths is the secret sauce for one of Astro’s main tricks: generating static pages for dynamic routes.

Imagine we have a blog with three posts (slugs defined below):

  1. hello-corgi
  2. building-chatgpt-for-corgis
  3. addressing-the-haters

Given an index page to list all of our blog posts (/blog), we have four routes in total.

In Astro, we would define two page files in src/pages to handle these:

  1. src/pages/blog.astro - render the /blog page
  2. src/pages/blog/[slug].astro - render a blog post, at /blog/:slug

By default, the blog post page is dynamic. If your Astro app gets a request to /blog/hello-corgi, it will look through the routes it knows about and render the blog post page on demand.

That’s great! File-based routing is incredibly easy to work with.

But what if we want to know about those pages ahead of time? We already have src/content/blog/hello-corgi.md and so on - we know these routes will always exist. Couldn’t we optimize by making those pages static?

This is where getStaticPaths comes in handy. First, we’ll indicate to Astro that we want this blog post page to be static, by setting the prerender export to true. Second, we’ll get all of the blog posts and export a list of static paths, based on the slug:

---
import { getCollection } from "astro:content";

export const prerender = true;

export async function getStaticPaths() {
  const blogEntries = await getCollection("blog");
  return blogEntries.map((entry) => ({
    params: { slug: entry.slug },
    props: { entry },
  }));
}

// Do other stuff to render the blog post
---

By implementing static paths, the Astro build engine can generate these pages ahead of time.
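
Concretely, for the three example slugs above, that mapping produces an array like this (plain objects standing in for real collection entries):

```javascript
// Mimic what getStaticPaths returns for the three example posts.
// Each element pairs a route param (the slug) with props for the page.
const blogEntries = [
  { slug: "hello-corgi" },
  { slug: "building-chatgpt-for-corgis" },
  { slug: "addressing-the-haters" },
];

const paths = blogEntries.map((entry) => ({
  params: { slug: entry.slug },
  props: { entry },
}));

console.log(paths[0]);
// { params: { slug: 'hello-corgi' }, props: { entry: { slug: 'hello-corgi' } } }
```

Astro generates one static page per element, and each page receives its entry as a prop.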

This gets even cooler when you introduce headless CMS tools like Sanity.io. You can have all of your blog posts live in a CMS, but when it comes time to build the site, you can make an API call at build time, grab X number of posts, and build X number of static pages.

Another place this can be helpful is with building sitemaps. When I first implemented a sitemap for Gangsheet, I noticed it had every page… except my blog posts. Because these were dynamically generated and rendered, they couldn’t be added to the sitemap.1 By adding getStaticPaths to my blog post page and prerendering it, the posts showed up on the Gangsheet sitemap.

Footnotes

  1. Most crawlers are pretty smart nowadays. If the blog post is linked somewhere on the site, the average Google/whatever search engine crawler can still render JS apps and grab the URLs as needed. But I’m old-school!

Monday, October 7, 2024

An Introduction to Astro's Content System

Astro is excellent for building blogs and documentation sites.

Over the weekend, I implemented Gangsheet.app’s blog using the built-in content collection system1, and I was impressed at how easy it was. In this blog post, I’ll lay out how I implemented it, with real code samples.

Astro’s Content Collection system

Astro has a built-in way to import .md, .mdx, .yaml, and .json files, and use them to generate pages for your site. These files are parsed and strictly typed using Zod.

I’m not tuned in to Astro’s development cycle, but I can speak as a developer who’s been in the static site/Jamstack space for a long time - Astro’s content layer is excellent. In a past life, I completely abused Gatsby’s content generation system to great effect - for instance, generating programmatic SEO pages for a frontend development job board by building massive numbers of combinatorial category pages2.

With that history, I’m pretty well-versed in what the space looks like for using local and remote data to generate static-first sites. The example I lay out in this post - generating a blog - is very straightforward. The number of pages generated is N+1, where N is the number of blog posts (plus one index page). But some of the things that Astro does - no doubt due to better tooling, and experience seeing where some Jamstack sites have fallen short - have me very optimistic that this is a good platform to build on.

Define a content collection

For Gangsheet, I wanted a blog. The posts are authored in Markdown, stored in src/content/blog/*.md. These posts should be parsed, and then rendered at both /blog (the index of all posts), and /blog/:slug (the page for each individual post).

First, we’ll create the folder src/content/blog, and then fill in src/content/config.ts, which configures all content collections for our app:

import { defineCollection, reference, z } from 'astro:content';

const blogCollection = defineCollection({
  type: 'content',
  schema: z.object({
    title: z.string(),
    excerpt: z.string(),
    author: z.object({
      name: z.string(),
      x: z.string(),
    }),
    publishedAt: z.date(),
    related: z.array(reference('blog')),
  }),
});

export const collections = {
  'blog': blogCollection,
};

A blog post has:

  • A title
  • An excerpt
  • An author, with a name and 𝕏 @username
  • A publishedAt date
  • Related posts

A few interesting things to note here:

  1. No slug! The slug is generated automatically for each post, based on the filename (e.g. src/content/blog/hello-world.mdx is, of course, /blog/hello-world).3
  2. Related posts - woah! You can reference other types, or the same type. We’ll see how easy this is when authoring, in just a second.4
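
The filename-to-slug convention from point 1 can be sketched as a simple transform - my own illustration of the behavior, not Astro’s actual implementation:

```javascript
// Derive a slug from a content file path the way Astro does by default:
// take the filename and drop the .md/.mdx extension.
function slugFromPath(path) {
  const filename = path.split("/").pop();
  return filename.replace(/\.mdx?$/, "");
}

console.log(slugFromPath("src/content/blog/hello-world.mdx"));
// hello-world
```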

Add a blog post

We can generate a blog post by creating a new Markdown file in src/content/blog. I’ll create src/content/blog/hello-world.md, below:

---
title: Hello world!
publishedAt: 2024-10-04
excerpt: My first blog post on my new Astro blog.
author: 
  name: Kristian Freeman
  x: kristianf_
related:
  - the-history-of-hello-world
---

Hello world! This is my new blog post.

Lots of interesting things to dig into here - luckily, it’s all pretty straightforward. title, publishedAt, and excerpt are simple string/date fields. author is an object (technically, a YAML mapping) with nested fields.

related is a collection of other blog posts, based on the slug parameter (again, defined by the filename). We’ll look at how to access the related blog posts in the blog post page, later on.

Anything after the frontmatter is, of course, the content itself. Astro supports MDX, so you should be able to do fancy React component stuff here, too. I haven’t found a need for that yet, but if you want to see an example of how it works, check out Astro’s “Add reading time” recipe.

Implement page generation

Now, we have a content collection, living at src/content/blog - how do we use it?

First, let’s briefly review Astro’s “page” functionality:

  1. Pages live inside src/pages
  2. Pages use the .astro extension, which executes JavaScript and allows React or other front-end composition
  3. They use file-based routing: src/pages/about.astro renders at /about, and so on

We’ll create two pages:

  1. src/pages/blog.astro - the blog index.
  2. src/pages/blog/[slug].astro - the template for each individual blog post.

Blog index

Defining the blog index page involves two steps - first, getting the collection using getCollection from the astro:content import. Then, we can render the blog posts using HTML:

---
import { getCollection } from "astro:content";

const posts = await getCollection("blog");
const sortedPosts = posts.sort(
  (a, b) => b.data.publishedAt.getTime() - a.data.publishedAt.getTime(),
);
---

<div>
  {sortedPosts.map((post) => (
    <div>
      <h2>
        <a href={`/blog/${post.slug}`}>
          {post.data.title}
        </a>
      </h2>
      <p>
        Published at {post.data.publishedAt}
      </p>
      <p>{post.data.excerpt}</p>
    </div>
  ))}
</div>

It won’t be indicated in the above code sample, but each post here is strongly typed. That means that post.slug and post.data, as well as everything inside data, get the benefit of TypeScript magic in your editor. If excerpt, for instance, were optional, we would be encouraged - via our editor and Astro’s build workflow - to handle the null case properly.

Post page

---
import { getEntry, getEntries } from "astro:content";

const { slug } = Astro.params;
if (!slug) return Astro.redirect("/blog");

const post = await getEntry("blog", slug);
if (!post) return Astro.redirect("/blog");

const { Content, headings } = await post.render();
const relatedPosts = await getEntries(post.data.related);
---

<div>
  <article>
    <h1>{post.data.title}</h1>

    <div>
      <time datetime={post.data.publishedAt.toISOString()}>
        {post.data.publishedAt.toDateString()}
      </time>
    </div>

    <div>
      <p>{post.data.author.name}</p>
    </div>

    <div id="content"><Content /></div>
  </article>

  <section>
    <h2>Related Posts</h2>

    {relatedPosts.map((relatedPost) => (
      <div>
        <h3><a href={`/blog/${relatedPost.slug}`}>{relatedPost.data.title}</a></h3>
      </div>
    ))}
  </section>
</div>

First, we grab the slug param from Astro.params. Then we use it to grab the specific post for this page - getEntry('blog', slug). post.render() pushes the Markdown through Astro’s MDX compiler, and returns a Content component that can be rendered on the page, as well as an array representing all the headings (h2, h3, etc.) in the content5.
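
Those headings entries each have depth, slug, and text fields. As a quick sketch of what you can do with them - pulling out the top-level sections for a table of contents:

```javascript
// Build a flat table of contents from Astro's headings array.
// Each heading has { depth, slug, text }; keep only the h2s here.
function tableOfContents(headings) {
  return headings
    .filter((h) => h.depth === 2)
    .map((h) => ({ text: h.text, href: `#${h.slug}` }));
}

// Example headings array, shaped like what post.render() returns.
const headings = [
  { depth: 2, slug: "setup", text: "Setup" },
  { depth: 3, slug: "details", text: "Details" },
  { depth: 2, slug: "usage", text: "Usage" },
];

console.log(tableOfContents(headings));
// [ { text: 'Setup', href: '#setup' }, { text: 'Usage', href: '#usage' } ]
```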

The rendering is similar to what we did on the index page. post.data contains everything inside of the frontmatter for the post, so you can pull title, excerpt, author, etc. out and reference it wherever you need it in the HTML.

When we need to load related posts, we can call getEntries (note the plural, not singular) to load all of the posts specified in post.data.related. We get back an array of related posts - still strictly parsed and typed, basically identical to the posts array we had on the index page. This is super powerful. I love this implementation!

Conclusion

I’m really happy I invested time in learning Astro’s content collection system. I haven’t yet had the chance to use Astro’s new system (in beta), but when Astro v5 is properly released, I’ll do a follow-up blog post on what’s changed.

I wrote last week about investing in learning Astro. This continues to pay off. I’ve been able to build a lot of complex functionality in it, and most of the issues I’ve run into have been totally solvable - even the hard stuff, like auth, dynamic data loading, etc.

What I’m excited about most with the content collection system is that it feels like it lives inside of my app, which is quite complex, without resorting to hacks. It fits into the rest of the app in a way that makes sense. For instance, if I wanted to pin the most recent blog post as an “announcement” banner on my dashboard page - I wouldn’t have to make a crazy GraphQL query and combine dynamic and static pages in a way that feels bad. I can just call getCollection("blog") on any Astro page and render it out. No hacks needed!

Footnotes

  1. I didn’t use Astro’s new v5 beta, which has apparently rewritten this system. I’m interested to see how it changed - maybe that will be a future post.

  2. Given X number of location categories, Y number of framework/language categories, and Z number of “experience”/skill-level categories, generate X*Y*Z number of SEO-optimized pages, like “senior React.js jobs in the United States”. I was generating thousands of pages and running up against the container my site was building in - with Netlify at the time - running out of memory. Fun times!

  3. You can also manually override the slug in the frontmatter of the blog post.

  4. As I’m writing this blog post, I’m realizing that author could be an awesome win here in terms of referencing. Instead of putting the author name/𝕏 username on every post, I could set up “src/content/authors/kristian.md” and just pass that reference in every blog post.

  5. You can use this to generate a table of contents. See an example on a blog post from Gangsheet’s blog.

Monday, October 7, 2024

How to Add Cloudflare Turnstile to Your Ruby on Rails Application

tl;dr - the rails-cloudflare-turnstile gem1 (GitHub link) is a great way to add Cloudflare Turnstile to your Rails app. Let’s learn how to use it!

Setup

First, install the gem:

$ bundle add rails-cloudflare-turnstile

If you haven’t enabled Turnstile yet in your Cloudflare account, follow the “Get Started” guide. You’ll need a sitekey and secret key. Make sure to associate it with your domain too - for instance, kristianfreeman.com - in the Turnstile settings.

Add your sitekey and secret key to your Rails app - I like to use rails credentials:edit:

$ rails credentials:edit -e development
$ rails credentials:edit -e production

My credentials are structured like this:

cloudflare_turnstile:
  site_key: foo
  secret_key: bar

Create an initializer file, called config/initializers/turnstile.rb:

RailsCloudflareTurnstile.configure do |c|
  c.site_key = Rails.application.credentials.cloudflare_turnstile[:site_key]
  c.secret_key = Rails.application.credentials.cloudflare_turnstile[:secret_key]
  c.fail_open = false
end

Usage

First, we’ll add the Turnstile JS script into an application layout file. If you’re super performance-sensitive, you may want to include it only on the pages where you’ll use Turnstile. Here, I’ll just add it in app/views/layouts/application.html.erb:

<head>
  <%= cloudflare_turnstile_script_tag %>
</head>

In your forms, you can use the <%= cloudflare_turnstile %> helper to embed the Turnstile UI element right into your form. For instance, on a signup page:

<div>
  <%= cloudflare_turnstile %>
  <%= f.submit t("passwordless.sessions.new.submit"), class: "btn" %>
</div>

Importantly, you also need to validate on the server side! Speaking as a CF employee who has talked to the Turnstile team: there are a lot of people implementing Turnstile… without the server-side validation.

Users are created in my app in UsersController#create. Let’s validate the Turnstile data before calling that method:

class UsersController < ApplicationController
  before_action :validate_cloudflare_turnstile, only: [:create] if Rails.env.production?
  rescue_from RailsCloudflareTurnstile::Forbidden, with: :forbidden_turnstile

  def create
    # Implementation of creating users
  end

  private
  def forbidden_turnstile
    flash[:error] = "We had a problem creating your account."
    redirect_to root_path
  end
end

If the validation fails, the gem raises a RailsCloudflareTurnstile::Forbidden exception. You need to rescue_from that exception in the controller and do something with the failure. I prefer not to tell users - they’re probably spammers - that their validation against Turnstile’s rules failed.

Turnstile is great - really easy to configure, and it’s literally saving me money by reducing the strain on my servers and ancillary analytics/marketing products, thanks to fewer junk users being added to my app. It only took ~15 minutes to implement on one of my Rails apps, and I’m already seeing results.

Footnotes

  1. Not my gem! But it’s great - thanks to Instrumentl for building it.

Sunday, October 6, 2024

How to use Lucide icons via a CDN

Similar to my standardization on shadcn, I’m starting to lean on Lucide as my icon library of choice. Usage in React is really simple - you install the package, import whatever icon you want, and get an SVG that can be customized via sane props for size, color, etc (and you can pass CSS, too).

If you want to use it in other places, you’ll need to use lucide-static (documentation). I recently shipped icons in my nav on this website, and I was impressed with how easy it was to use. It’s just an img tag, with unpkg.com as the CDN. Here’s how I use it in my nav:

<img src="https://unpkg.com/lucide-static@latest/icons/house.svg" /> Home
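
If you use several icons this way, a tiny helper can build the URLs for you. This is just a sketch - the URL shape matches the example above, and pinning a real version instead of latest is generally safer for production:

```javascript
// Build a lucide-static CDN URL for a given icon name. "latest"
// matches the usage above; pass a pinned version for stability.
function lucideIconUrl(name, version = "latest") {
  return `https://unpkg.com/lucide-static@${version}/icons/${name}.svg`;
}

console.log(lucideIconUrl("house"));
// https://unpkg.com/lucide-static@latest/icons/house.svg
```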

Sunday, September 29, 2024

Deploying Astro Applications to Cloudflare

I’m in on Astro. I haven’t quite figured out every little detail of it, but it feels comprehensive enough that I’m ready to invest more time and energy into making it the new way I deploy full-stack JS apps.

Overview

Here’s how I deploy Astro apps to Cloudflare:

  1. Create a new Astro app (npm create astro)
  2. Add the Cloudflare config (npx astro add cloudflare). This installs the @astrojs/cloudflare package.
  3. Ensure astro.config.mjs looks like the below code sample. The snippet makes some assumptions that I’ll explain shortly.
  4. Create wrangler.toml and configure it as seen below.
  5. Install @cloudflare/workers-types and reference it in tsconfig.json.

Files

astro.config.mjs:

// @ts-check
import { defineConfig } from 'astro/config';
import cloudflare from '@astrojs/cloudflare';

export default defineConfig({
  output: 'server',
  adapter: cloudflare({
    imageService: "passthrough",
    platformProxy: {
      enabled: true,
    },
  }),
});

src/env.d.ts:

/// <reference path="../.astro/types.d.ts" />
type Runtime = import('@astrojs/cloudflare').Runtime<Env>;
declare namespace App {
  interface Locals extends Runtime { }
}

tsconfig.json (add this line to your full config):

{
  "compilerOptions": {
    "types": ["@cloudflare/workers-types"],
  },
}

wrangler.toml:

name = "appname"
compatibility_date = "2024-09-25"
pages_build_output_dir = "dist"

How your app works

What assumptions/decisions this makes:

  1. Full-stack apps

The output: 'server' line in the Astro config tells Astro to generate server-side rendering code, so that Cloudflare runs the app on its edge network. This allows you to write functions that have full access to requests.

If you want a static app – no dynamic code or functions – you can change the output line. But it’s probably worth keeping it as server, since CF supports it, in case you need a dynamic function/page at some point.

  2. Deploy to Cloudflare Pages

Candidly, the CF Pages/Workers story is a bit confusing right now.1 The way that Cloudflare handles deploys with wrangler leads to this deployment strategy. Simply put – astro build will build to dist with this config, and the pages_build_output_dir directive in wrangler.toml tells wrangler to upload that directory as the Cloudflare Pages deployment.

If the story continues to change with Cloudflare Pages, it may be that the app technically gets deployed to Workers instead of Pages, but for now, this solution is just fine. I’ll revisit this if needed.

  3. Use platformProxy and write platform-specific functions

With the platformProxy configuration enabled in astro.config.mjs, you can use Astro’s built-in commands for most of your day-to-day development. astro dev wraps wrangler pages dev, with full support for Cloudflare bindings, local/remote development, etc. My package.json scripts look like this:

{
  "scripts": {
    "dev": "wrangler types && astro dev",
    "deploy": "npm run build && wrangler pages deploy"
  }
}

With everything configured correctly, all your JS stuff on the frontend should work as you’d expect. On the “backend”, functions are really powerful: you get file-based routing and full access to bindings, and you write full-stack functions to handle requests and return responses:

// src/pages/api/example.ts
export async function PUT({ locals, request }) {
  // Get access to env bindings
  const { env } = locals.runtime;
  // Do stuff
  return new Response("OK")
}
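
That file-based routing can be sketched as a plain path-to-route transform - my own rough illustration of the convention, not Astro’s internals:

```javascript
// Rough sketch of Astro's file-based routing: strip the src/pages
// prefix and the extension; index files map to their directory root.
function routeForFile(path) {
  let route = path
    .replace(/^src\/pages/, "")
    .replace(/\.(astro|ts|js|md|mdx)$/, "");
  if (route.endsWith("/index")) route = route.slice(0, -"index".length);
  return route === "" ? "/" : route;
}

console.log(routeForFile("src/pages/api/example.ts")); // /api/example
console.log(routeForFile("src/pages/index.astro"));    // /
```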

Conclusion

I’ll update this as needed, as I continue to build with Astro. But so far, I’m impressed. I invested a lot of time into Gatsby to learn how to build full-stack apps on it. Astro seems like a more robust solution to a lot of the issues that I ran into with more complex applications on Gatsby.

Footnotes

  1. Speaking as a Developer Advocate at Cloudflare.

Sunday, September 22, 2024

Create a Zellij instance with a useful session name

zellij is a great modern alternative to tmux.

The only thing I don’t like about it is the random session names. Every session is automatically named something like colorful-aardvark when you start it up. This makes it very hard to find existing sessions for projects when you come back to them.

The za function below either uses the current directory name, or its first parameter, as the session_name variable. It then tries to attach to an existing zellij session with that name, or creates a new one:

za() {
  local session_name=${1:-${PWD:t}}
  zellij attach "$session_name" || zellij -s "$session_name"
}

Now, in any directory, you can type za. For instance, if I’m in ~/src/blog, it will look for a session called blog and join it, or create a new one if it doesn’t exist. (Note that the ${PWD:t} modifier is zsh-specific; in bash, you could use $(basename "$PWD") instead.)

Saturday, June 1, 2024

Quick review of Zellij

There are two pieces of sticky software that I have never been able to escape from: vim & tmux.

tmux in particular was a huge upgrade for me over my previous workflow. It was the first time I realized that terminals could be smarter and do splits, tabs, etc.

Those things are built into terminal apps now, but ten years ago, as I was getting comfortable in the command-line, it was mind-blowing!

But I have always fought with tmux over a few specific things: sane scroll behavior, keybindings, and theming.

Not sure where I first saw Zellij, but when it was mentioned as a modern tmux, I was intrigued! Now, after a few weeks of usage, I’m confident Zellij is going to replace tmux on all my machines.

It has great defaults for those three things I mentioned above. Scrolling via the trackpad just works in Zellij. Zellij makes its keybindings very visible (see screenshots), but it also has support for the default tmux keybindings, which is a neat touch considering I have a decade of muscle memory doing <C-b> n, <C-b> p, etc. I’ll slowly relearn those habits, but having that to fall back on is great.

I’m excited to dig into some of the more advanced functionality too - I see that it has floating shell support, it can revive sessions after they’ve been killed, and I’m sure there’s a lot more I haven’t even discovered yet.

Like a lot of really good CLIs, it’s been unobtrusive while I get my bearings. The complex stuff is still there! But I know it will stay out of my way until I’m ready for it.

It’s a fun time to be into command-line tools! So many new things to try out.

Saturday, May 25, 2024

How to fix 𝕏's broken "Download an archive of your data" feature

I recently exported my Twitter (now 𝕏) data archive to clean up some old tweets. Once I started generating it, it took about 24 hours to create the archive.

The URL I received to download the archive was in the structure https://ton.twitter.com/i/ton/data/archives/123456789/twitter-2024-05-25-hash.zip.

Trying to access this URL gave me an error in Safari. I tried it in Chrome, but got a 401 Unauthorized error, as I wasn’t logged in to 𝕏 on Chrome.

Then I remembered that the app had recently migrated to the x.com domain.

I tried accessing https://ton.x.com/i/ton/data/archives/123456789/twitter-2024-05-25-hash.zip, and it worked correctly — the fix being ton.x.com instead of ton.twitter.com. It seems like the URL was hardcoded somewhere, and when they moved domains, they didn’t update the code.
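
In code, the workaround amounts to a one-line rewrite of the download URL:

```javascript
// Rewrite the broken archive URL from the old domain to the new one.
function fixArchiveUrl(url) {
  return url.replace("ton.twitter.com", "ton.x.com");
}

console.log(
  fixArchiveUrl("https://ton.twitter.com/i/ton/data/archives/123456789/archive.zip")
);
// https://ton.x.com/i/ton/data/archives/123456789/archive.zip
```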

Friday, June 23, 2023

How to set up a new macOS system, using Homebrew and Brewfile

I recently bought a new Mac Studio. I needed a fast and organized way to set it up, and I also wanted to keep track of the software I was installing. I wanted to create a system that would allow me to easily replicate the same setup on future computers.

That is why I developed this open-source utility, which is available for everyone on GitHub. You can find it at codewithkristian/computer.

This utility leverages Homebrew, a software management tool for Macs, in combination with a special file known as a Brewfile.

In this blog post, I’ll quickly cover the Brewfile, which I would guess many people don’t know exists, as well as explain some of the tooling I wrote around Brewfile to make it easier to manage for my needs.

Understanding the Brewfile

The purpose of the Brewfile is to manage all the apps and programs you install on your computer. Formatted as a simple list, each line represents an application or a package that can be installed through Homebrew. It separates brews, which are Homebrew packages, and casks, which are Mac apps installed through homebrew-cask.

Here’s a short example of what a Brewfile looks like:

brew "ffmpeg"
brew "gnu-sed"
brew "mkvtoolnix"
brew "neovim"
brew "python@3.11"
brew "rclone"
cask "1password"
cask "amethyst"
cask "arq"
cask "karabiner-elements"
cask "keyboard-maestro"
cask "kitty"
cask "vlc"

To install the packages, you can use the brew bundle command.

The Brewfile removes the complexity of remembering individual install commands and allows the process to be automated from one easily managed file. This lets you replicate your software configuration on any new system effortlessly, keeping your setup consistent across devices and saving a significant amount of time.

Usage

My open-sourced computer repository manages two primary tasks that go above and beyond a simple Brewfile:

  1. Track the Brewfile in source, preferably on GitHub.
  2. Provide utilities around the Brewfile to make it easy to add and manage new packages.

To set up a new computer, I can clone the repository and run ./install in the terminal, which installs all the packages.

Additional apps can be installed using the commands ./add-brew brew-name or ./add-cask cask-name in the terminal, where brew-name and cask-name are replaced with the names of the packages.

To streamline the management of these apps, the utility comes with commands like ./sort-brewfile, which keeps your Brewfile neat and ordered, and ./commit, which makes a new commit with the current time as the description. Each time I add a new brew or cask entry to the Brewfile, the file is automatically sorted, a new commit is created, and it’s pushed up to GitHub. This means it’s always up-to-date (without me having to manage it manually).
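
I won’t reproduce the repo’s scripts here, but the sorting step amounts to something like this JavaScript sketch (an illustration only - the real ./sort-brewfile is a script in the repo):

```javascript
// Sketch of Brewfile sorting: brews first, then casks, each group
// alphabetized. An illustration, not the repo's actual script.
function sortBrewfile(lines) {
  const group = (prefix) =>
    lines.filter((l) => l.startsWith(prefix)).sort();
  return [...group("brew "), ...group("cask ")];
}

console.log(sortBrewfile(['cask "vlc"', 'brew "neovim"', 'brew "ffmpeg"']));
// [ 'brew "ffmpeg"', 'brew "neovim"', 'cask "vlc"' ]
```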

Tailor it for Your Needs

As this utility is open-source, it can be cloned and modified according to your specific needs. Modify it and use it to quickly set up your new computer the way you want.

It’s worth noting that if you’re coming from an existing setup, you can “dump” your currently installed Homebrew brews/casks using brew bundle dump, which will write everything into a Brewfile in the current directory. But for my uses, I wanted to start from scratch, because my previous computers had a lot of cruft that I didn’t necessarily want to bring to my new machine.

Hopefully, this utility will be of help to users wanting a systematic and customizable way to set up new computers. By recording your particular computer’s configuration, you can easily duplicate it across multiple systems, saving precious time and effort.