Saturday, January 3, 2026

Agentic coding is the default

At some point last year, agentic coding became the default in my workflow. This causes a lot of FUD for some developers. Will I forget how to code? When I learn new things, will I actually understand how they work?

Honestly, I’m not sure of the answers to these questions. But the productivity gains are too incredible to ignore. I can kick off brand new projects and make sizeable progress in minutes; I can finish projects in hours. I can throw OpenCode (yep, I migrated from Claude Code in December) at a problem and Opus 4.5 will keep it cruising for 10-20 minutes without me. I can run multiple OpenCode instances for multiple projects at once, and jump around doing some light context switching to ship at a pace that is probably 100x what I could have done before.

I don’t really know what that means for the future of software development. Things are clearly changing, and if you aren’t interested in these tools, or at least trying them, I think you’re going to be left behind. Will my skills - the ones I spent over a decade building from scratch - become rusty? Probably. But does that matter when AI is consuming software, creating software, and becoming the way our tools interact with each other? Maybe not.

Saturday, October 18, 2025

My talk from Hono Conference 2025

I’m very excited to be speaking at Hono Conference 2025. As a permalink for what I’ll be speaking about there, this post is an overview of my talk, pertinent links, and other things related to the talk.

Resources

Cloudflare Workers x Hono

My talk is about Cloudflare Workers and Hono - how they pair well together, and my history with each of them. Here are some longer-form thoughts on that.

I began working at Cloudflare as a developer advocate in 2019. At that time, Workers was pretty new, but the platform was very powerful. I came from a background of building on AWS Lambda, so I was familiar with the idea of building serverless applications - specifically, with Node. Workers was (and still is) not Node, so in addition to learning the differences between the Workers platform and Node (not being able to use Express being one of the primary differences), there were also ergonomic differences in writing Workers applications. The standard Workers application in 2019 was implemented as a service worker, using an event listener to hook into the fetch event and respond to it:

addEventListener("fetch", event => {
  event.respondWith(new Response("Hello, world"))
})

This code was concise, but I quickly learned it didn’t scale well to full applications. Specifically, routing became a primary concern in many of the fundamental tutorials I wrote in 2019-2020 for the Cloudflare Workers documentation. There were some solutions that popped up (itty-router was one of the primary precursors to Hono that I became familiar with), but there wasn’t a true full-stack routing system for Workers that felt native to the platform.
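For a sense of what that era looked like, here’s a rough sketch of routing with itty-router in the service worker format (the routes here are illustrative, not from a real project):

import { Router } from "itty-router"

const router = Router()

// The router attaches URL params to the request before calling the handler
router.get("/", () => new Response("Hello, world"))
router.get("/todos/:id", request => new Response(`Todo #${request.params.id}`))
router.all("*", () => new Response("Not found", { status: 404 }))

addEventListener("fetch", event => {
  event.respondWith(router.handle(event.request))
})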

The second variant of a Workers application became available later, called a “module worker”. This format looked like an ES module, and was built to eventually support multiple classes and entrypoints inside of a single application. The syntax was even more concise, which was great:

export default {
  fetch: request => {
    return new Response("Hello, world")
  }
}

As the platform matured and made more effective use of module workers, you could define additional event handlers inside of that module, as well as export other classes side-by-side with it, such as Durable Object classes:

import { DurableObject } from "cloudflare:workers"

export class MyDurableObject extends DurableObject {
  fetch(request) {
    return new Response("Hello from a Durable Object")
  }
}

export default {
  fetch: request => {
    return new Response("Hello, world")
  },
  scheduled: event => {
    // Handle a recurring scheduled event
  }
}

In the following years, the bindings concept in Cloudflare Workers became more powerful, and ubiquitous in large-scale Workers applications. Bindings allowed the Workers runtime to hook various resources from the larger ecosystem (what we began to thematically refer to as the “Cloudflare Developer Platform”, or “Workers Platform”) directly into your Workers application. This meant that tools like Workers KV, an eventually-consistent key-value store, and later, Cloudflare D1, our SQLite database, could be used directly in Workers applications without any additional setup code - you could create the resource, define the binding in a configuration file, and it became usable immediately:

# wrangler.toml
name = "my-workers-app"

[[kv_namespaces]]
binding = "KV"
id = "f39f24ff-15c2-4dbd-a21e-b0d657bef48f"

The KV namespace, identified via the namespace ID, became available in your Worker as env.KV:

export default {
  fetch: async (request, env) => {
    const message = await env.KV.get("message")
    return new Response(message || "Hello, world")
  }
}

In short, there were a number of additions to the ecosystem that made Workers incredibly compelling from an ergonomics perspective. It was concise, and powerful platform-level primitives were usable via just a few lines of code. But it was still missing a fundamental way to build large-scale, fullstack applications in a friendly way.

Enter Hono

I first came across Hono in 2021 via a pull request. During that time, I spent a good part of my day-to-day reviewing pull requests, and helping grow our documentation for Workers and the rest of the associated developer platform tools. I’m sad to say that I missed the initial PR where Yusuke Wada, the creator of Hono, added it to our examples section of the Workers docs, but I caught the second PR, with a few typos and bugfixes. I don’t think we had a large amount of interest in Workers from developers in Japan (at least, to my knowledge at the time), so seeing Yusuke contribute a pull request caught my eye. I followed the link he shared to hono.dev to check out the framework.

I was immediately very impressed. Hono looked like a great solution to the problem we had faced, not just on the developer relations team, but on the Workers platform as a whole. A routing system for Workers, combined with first-class support for bindings.
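To give a sense of that first impression, here’s a minimal Hono Worker - a sketch of my own, not code lifted from the docs:

import { Hono } from "hono"

const app = new Hono()

// Routes are declared with familiar HTTP verb methods
app.get("/", c => c.text("Hello, world"))

// The Hono app itself becomes the module worker's default export
export default app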

Imagine we built a simple API system for both reading and writing to a KV namespace. Any given key could be specified via a URL pathname, with HTTP methods (GET and POST) used as the differentiator between reading or writing. In vanilla Workers, it would look something like this:

export default {
  fetch: async (request, env) => {
    const { method, url } = request
    const u = new URL(url)
    const key = u.pathname.replace('/', '')
    if (method == "POST") {
      const body = await request.json()
      if (body.value) {
        await env.KV.put(key, body.value)
        return new Response("OK")
      } else {
        return new Response("Missing value in body", {
          status: 402
        })
      }
    } else if (method == "GET") {
      const value = await env.KV.get(key)
      if (!value) {
        return new Response("No message found", {
          status: 502 // TODO: is this the right status code?
        })
      } else {
        return new Response(value)
      }
    } else {
      return new Response("Method not allowed", {
        status: 405
      })
    }
  }
}

There are a lot of compromises here in order to make this API work. With no native routing, we match on the inbound request method, and do some sketchy string replacement to approximate a URL-driven “key”. Reading JSON out of the request body is similarly brittle. We could grade this as B- code: it certainly gets the job done, but it won’t hold up to scrutiny and is pretty easy to crash.

Moving this to Hono immediately condenses the code:

import { Hono } from "hono"

const app = new Hono()

app.get("/:key", async c => {
  const key = c.req.param("key")
  const value = await c.env.KV.get(key)
  if (!value) {
    return c.text("No message found", 502)
  } else {
    return c.text(value)
  }
})

app.post("/:key", async c => {
  const key = c.req.param("key")
  const body = await c.req.json()
  if (body.value) {
    await c.env.KV.put(key, body.value)
    return c.text("OK")
  } else {
    return c.text("Missing value in body", 402)
  }
})

app.all("/:key", c => {
  return c.text("Method not allowed", 405)
})

export default app
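One more ergonomic win worth mentioning - and this snippet is my own illustration, not something from the talk: Hono lets you type your bindings, so c.env is fully typed in TypeScript:

type Bindings = {
  // KVNamespace comes from @cloudflare/workers-types
  KV: KVNamespace
}

const app = new Hono<{ Bindings: Bindings }>()

app.get("/:key", async c => {
  // c.env.KV is now typed as a KVNamespace
  const value = await c.env.KV.get(c.req.param("key"))
  return c.text(value ?? "No message found")
})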

(I had more to say here originally, but tbh I lost steam and I need to work on my slides. Sorry!)

Saturday, June 7, 2025

Ecovacs Goat A3000 Review

I recently purchased the Ecovacs Goat A3000 robot lawn-mower (affiliate link). This is an in-progress review as I use it.

Should you buy it?

Maybe. I like a lot about it so far, but I’m still in the first week of setting it up and using it. Stay tuned.

If it can reliably do the majority of what it says it can do, and you can afford it (it was $3k!), it might be worth it. I have a third of an acre, so it’s a lot to mow. The interior of my house is vacuumed on a pretty regular basis by robot vacuums. There is a “wow, cool” factor already on day one, as I see it out in the yard taking care of my grass. Nifty!

Setup

The installation will take roughly an hour. That includes taking everything out of the box, putting together the charger and the associated RTK station, and connecting the lawn mower itself to the app.

The RTK station is the secret sauce that helps the mower understand where it is. I’ll add some pics here at some point. I didn’t understand that there was an RTK station until I put everything together. This means that you’ll need to have an ugly pole about six feet up in the air, somewhere in your yard.

The good news is that the RTK station is pretty small, and it’s not too obtrusive. The bad news is that it’s a pole in your yard. I put mine in the back corner of my yard, where it’s not too visible from the street.

Initial impressions

The mower is pretty cool. It’s a lot quieter than I expected, and it’s pretty fast. It moves around the yard at a decent clip, and it seems to be doing a good job of cutting the grass. I’m excited to see how it does over the next few weeks.

One thing I’m a little concerned about is how well it will handle obstacles. I have a few trees in my yard, and I’m curious to see how well it navigates around them. The app has a feature where you can set up “no-go zones” where the mower won’t go, so I’m going to use that to keep it away from my flower beds.

Conclusion

I’ll update this post as I use the mower more. So far, I’m pretty happy with it. It’s a cool piece of tech, and it’s fun to watch it work. I’m looking forward to seeing how it does over the next few months.

Thursday, April 17, 2025

Setting up TeslaMate with Docker and Proxmox

TeslaMate is an open-source tool that allows you to monitor your Tesla vehicles. It’s a great tool for keeping track of your vehicle’s location, battery level, and other important information. I recently bought a Model Y, and I wanted to be able to monitor the car remotely. TeslaMate also tracks drives - where you started, where you ended, and the drive efficiency - which will be a good way to track business travel.

My home lab runs Proxmox. I wanted to be able to run TeslaMate on my home network, so I set up a Docker container. Here’s how I did it. (And if you find this useful, or end up buying a Tesla… use my referral code!)

Create a container

Whenever I want to deploy something in Proxmox, I start with the Proxmox VE Helper-Scripts repo. It has a collection of scripts for creating Proxmox containers.

The Docker script sets up an LXC container (Debian under the hood) with Docker installed. You can run it in the Proxmox console:

$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/docker.sh)"

This sets up a new container called docker. It also has docker-compose installed by default.

Better defaults

The LXC container that Proxmox creates is called docker. This isn’t super helpful when running multiple containers in the same Proxmox instance. Each LXC container has a configuration file in /etc/pve/lxc/<id>.conf, named after the container ID. Let’s modify the file for our docker container:

arch: amd64
cores: 2
features: keyctl=1,nesting=1
# Rename the container to something more useful
hostname: teslamate
memory: 2048
net0: name=eth0,bridge=vmbr0,hwaddr=BC:24:11:85:3B:B5,ip=dhcp,type=veth
onboot: 1
ostype: debian
rootfs: local-lvm:vm-105-disk-0,size=4G
swap: 512
tags: community-script;docker;tailscale
# Add configuration to allow Tailscale to run
unprivileged: 1
lxc.cgroup.devices.allow: c 10:200 rwm
lxc.mount.entry: /dev/net dev/net none bind,create=dir

Restart the container inside of the Proxmox UI before continuing.

Configuring the container

Inside of the container, I set up a new directory for TeslaMate:

$ mkdir -p /etc/teslamate

I set up a docker-compose.yml file in that directory:

services:
  teslamate:
    image: teslamate/teslamate:latest
    restart: always
    environment:
      - ENCRYPTION_KEY=ENCRYPTION_KEY
      - DATABASE_USER=teslamate
      - DATABASE_PASS=mysecurepassword
      - DATABASE_NAME=teslamate
      - DATABASE_HOST=database
      - MQTT_HOST=mosquitto
    ports:
      - 4000:4000
    volumes:
      - ./import:/opt/app/import
    cap_drop:
      - all

  database:
    image: postgres:17
    restart: always
    environment:
      - POSTGRES_USER=teslamate
      - POSTGRES_PASSWORD=mysecurepassword
      - POSTGRES_DB=teslamate
    volumes:
      - teslamate-db:/var/lib/postgresql/data

  grafana:
    image: teslamate/grafana:latest
    restart: always
    environment:
      - DATABASE_USER=teslamate
      - DATABASE_PASS=mysecurepassword
      - DATABASE_NAME=teslamate
      - DATABASE_HOST=database
    ports:
      - 3000:3000
    volumes:
      - teslamate-grafana-data:/var/lib/grafana

  mosquitto:
    image: eclipse-mosquitto:2
    restart: always
    command: mosquitto -c /mosquitto-no-auth.conf
    # ports:
    #   - 1883:1883
    volumes:
      - mosquitto-conf:/mosquitto/config
      - mosquitto-data:/mosquitto/data

volumes:
  teslamate-db:
  teslamate-grafana-data:
  mosquitto-conf:
  mosquitto-data:

Obviously, change the ENCRYPTION_KEY and DATABASE_PASS values to something secure. Each instance of mysecurepassword needs to be the same password - these services are all connecting to the same database.

TeslaMate is basically a collection of services - the database, a web server/UI, and a Grafana instance.

Starting up TeslaMate

Now we can start up the containers:

$ docker-compose up -d

You can connect to the TeslaMate UI at http://<your-proxmox-ip>:4000. Use the login from the docker-compose.yml file.

The Grafana UI contains most of the metrics you’ll want to look at, as well as default dashboards. You can connect to it at http://<your-proxmox-ip>:3000. When you first log in, you’ll need to change the password.

Authenticating with Tesla

You’ll need to generate a Tesla API key. On macOS, I used TeslaAuth to do this. It’s a CLI you can run locally that spins up a browser window and lets you log in to your Tesla account. Once you’ve authenticated, it will print out access and refresh tokens. You can paste those into TeslaMate to allow it to connect to your car via Tesla’s API.

Set up Tailscale

If we want to access the TeslaMate UI from outside of our network, we need to set up Tailscale. This is easy with the add-tailscale-lxc script. Run it in the Proxmox console (not the LXC container), and select the teslamate container.

$ bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/add-tailscale-lxc.sh)"

Once Tailscale is set up, jump back into the LXC container console. You can start the Tailscale agent with:

$ sudo systemctl start tailscaled
$ tailscale up

Once you’ve authenticated to your Tailscale network, you can access TeslaMate easily from any other machine in your Tailscale network: http://teslamate:4000 for the TeslaMate UI, or http://teslamate:3000 for Grafana.

Dashboards

There’s a ton of info in the default dashboards set up by TeslaMate. The “Drives” one is a good example, showing every drive I’ve taken in my car:

TeslaMate Grafana dashboard

There’s a ton of other dashboards too, for tracking all sorts of information about the car:

TeslaMate Grafana dashboard

Raycast extension

A neat party trick is using the TeslaMate extension for Raycast to view your car info directly in Raycast. The only requirement is that your computer needs to be in the same Tailscale network as your TeslaMate instance.

TeslaMate Raycast extension

Note that a bunch of stuff here doesn’t render correctly. It’s on my list of things to look at - since all the extensions in Raycast are open source, I could contribute some bug reports or even fix them myself. For now, I care mostly about seeing the battery status - especially if I’m charging at work. It’s awesome to have access to it from Raycast.

There’s a bit of setup needed for this extension to work. Using the instructions in the Raycast extension docs, you can set up a Service Account Token and datasource UID:

How to create Service Account Token for Grafana

1. Go to your Grafana instance
2. In the left menubar click on Users and access
3. Click on Service accounts
4. Click on Add service account
5. Choose any Display name for your service account
6. Set service account role to Viewer
7. Click on Create
8. Click on Add service account token
9. Choose any Display name for your service account token
10. Set Expiration to No expiration
11. Copy your token to the TeslaMate Raycast Extension 🎉

How to get the UID of the datasource

1. Go to your Grafana instance
2. In the left menubar click on Connections -> Data sources
3. Click on the DB TeslaMate PostgreSQL
4. The URL should now show something like /connections/edit/Pxxxxxxxxx
5. The Pxxxxxxxxx is the UID of your data source - copy it to the TeslaMate Raycast Extension 🎉

Conclusion

It’s fun to have API access to your car. TeslaMate takes a lot of the raw data out of the car and makes it available in a nice series of dashboards. It also can be run for free, using off-the-shelf Docker Compose scripts and the computers you probably already have running at home.

If you find this useful, or end up buying a Tesla… use my referral code!

Monday, April 14, 2025

Rewriting functionality with Claude Code

Claude Code is an awesome AI code agent that can inspect a local repository and add, remove, or modify code. It’s great (but expensive!) and I’ve used it on my personal site a bit. I also produced a video for Cloudflare’s Developers YouTube channel on how it works with Cloudflare projects.

Would I trust it for production code? Probably not. But I like it a lot for a wide swath of in-progress projects I’m working on that need quality-of-life improvements to make it a bit easier to work with.

Real-world example

Back at the end of last year, I wrote about my Nix config and how I use it to manage my development environment across multiple machines.

Since my Nix config has grown, there’s been a lot of code re-use. Each machine has a specific Nix configuration file, and since I use nix-homebrew, each machine started to grow a big config. Here’s an example:

{ config, pkgs, ... }:

let 
in
{
  environment.systemPackages = [
    pkgs.consul
  ];

  homebrew.casks = [
    "raycast"
  ];
}

This works fine at first, but what happens when you start adding new casks across multiple machines? You’ll have to add them to each machine’s config. This is a lot of repetition, and it’s easy to forget to add a new cask to a machine.

I used Claude Code to help me automate this. I asked it to take a look at every machine, and extract a common config between each of them.

The initial result didn’t work, but after pasting in the error message a few times, Claude Code was ultimately able to figure out what was wrong. The final result was really well-structured, and importantly, gives me a good foundation for syncing these machines in the future.

This first file is the new per-machine config. It also includes specific overrides per machine. For instance, this is my laptop, so I have some specific laptop-only packages here.

{ config, pkgs, ... }:

let 
in
{
  # Import common macOS configuration
  imports = [
    ../darwin-common.nix
    ../darwin-homebrew.nix
    ../roles/development.nix
    ../roles/media.nix
  ];

  # Host-specific overrides and additions
  
  # Host-specific additional Nix packages
  environment.systemPackages = [
    pkgs.consul
  ];

  # Host-specific additional Homebrew casks
  homebrew.casks = [
    "ledger-live"
    "steam"
  ];

  # Host-specific additional Mac App Store apps
  homebrew.masApps = {
    MacFamilyTree = 1567970985;
  };
}

darwin-homebrew.nix is the base shared config for setting up Homebrew on macOS. It includes the shared casks, taps, and brews, as well as the Mac App Store apps, that get installed on all machines.

# Common Homebrew configuration for macOS systems
{ config, pkgs, ... }@args:

{
  # Install homebrew if not yet installed
  nix-homebrew = {
    enable = true;
    enableRosetta = true;
    user = "kristian";
  };

  # Common Homebrew configuration
  homebrew = {
    enable = true;
    onActivation.cleanup = "uninstall";
    taps = [
      "homebrew/homebrew-core"
      "homebrew/homebrew-cask"
    ];
    
    # Common brews across all macs (can be overridden in host-specific configs)
    brews = [];
    
    # Common casks across all macs
    casks = [ 
      "1password" 
      "caffeine"
      "flux"
      "font-atkinson-hyperlegible"
      "font-jetbrains-mono-nerd-font"
      "jordanbaird-ice"
      "loom"
      "macwhisper"
      "ollama"
      "raycast"
      "recut"
      "the-unarchiver"
    ];

    # Common Mac App Store apps
    masApps = {
      Noir = 1592917505;
      OnePasswordExtension = 1569813296;
    };
  };
  
  # Common Nix packages for all macOS hosts
  environment.systemPackages = [
    pkgs._1password-cli
    pkgs.mas
    pkgs.yt-dlp
  ];
}

I also added domain-specific configs. media.nix is for media management, players, etc. This is additive to the base config, so I can add new packages and apps to it.

# Media machine role configuration
{ config, pkgs, ... }:

{
  # Media tools via Homebrew
  homebrew.casks = [
    "audacity"
    "calibre"
    "macwhisper"
    "plexamp"
    "sonos"
    "splice"
    "spotify"
    "transmission"
    "ultimate-vocal-remover"
    "vlc"
  ];
  
  # Media-focused Mac App Store apps
  homebrew.masApps = {
    PixelmatorPro = 1289583905;
  };
  
  # Media tools via Nix
  environment.systemPackages = [
    pkgs.ffmpeg
    pkgs.imagemagick
  ];
}

This is the kind of stuff AI is so good at. I probably wouldn’t have spent the time to figure this out myself, at least, not without an afternoon to work on it. But I could spend 5-10 minutes and get a really minor todo off of my list using Claude Code. Highly recommended!

Monday, February 24, 2025

"Act as my personal strategic advisor"

It’s always interesting to see AI prompts written very differently from how you would write them. I found this prompt yesterday, and it was super effective at unblocking me on some stuff:

Act as my personal strategic advisor with the following context:

- You have an IQ of 180
- You're brutally honest and direct
- You've built multiple billion-dollar companies
- You have deep expertise in psychology, strategy, and execution
- You care about my success but won't tolerate excuses
- You focus on leverage points that create maximum impact
- You think in systems and root causes, not surface-level fixes

Your mission is to:

- Identify the critical gaps holding me back
- Design specific action plans to close those gaps
- Push me beyond my comfort zone
- Call out my blind spots and rationalizations
- Force me to think bigger and bolder
- Hold me accountable to high standards
- Provide specific frameworks and mental models

For each response:

- Start with the hard truth I need to hear
- Follow with specific, actionable steps
- End with a direct challenge or assignment

This prompt makes decisions. It gave me some tough advice about things to focus on. Normally, I’m pretty collaborative with LLMs (“let’s figure this out together”), but this prompt is an interesting shift on that (“tell me what to do”).

I normally use Claude. But this prompt is really good in ChatGPT because ChatGPT has memory: it knows what I’m working on and what decisions I’ve already made. So it can pull up all of the things we’ve talked about, and evaluate them. As it says in the prompt - it can be brutally honest and direct. It’s pretty neat.

Give it a shot, you’ll probably find the output thought-provoking.

Monday, February 24, 2025

Image Binding in Workers

You can now interact with Cloudflare Images from inside of a Workers application, using a new images binding. This allows you to load images, transform and manipulate them, and even generate watermarked images with just a few lines of code.

In this post, I’ll show you how I built a simple watermarking URL path in Workers on top of this site, my personal blog. It will automatically add a watermark to any image that passes through it. Here’s an example of what it looks like:

Example img

See that little watermark in the bottom right? Neat!

import Example from '@/images/example-for-watermark.png';

<img src={`/cgi-bin/watermark?url=${Example.src}`} alt="Example img" />

Setup

First, add the new [images] directive to your wrangler.toml file:

name = "kristianfreeman-astro"
compatibility_date = "2024-10-22"
main = "./worker.ts"

[images]
binding = "IMAGES"

Defining the Workers function

My site is built with Workers Assets. When wrangler deploys this site, it bundles up the dist folder and uploads it to Cloudflare. Additionally, you can define a Workers script that will run alongside the site. This allows you to define custom behavior.

Below, I’ll define a new worker.ts file in the root of my project. We can check the request URL and see if it matches the path we want to intercept. If it does, we’ll fetch the image, and then watermark it.

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/cgi-bin/watermark")) {
      const imagePath = url.searchParams.get("url");
      const imageUrl = new URL(imagePath, request.url).href;

      try {
        // Load the image based on url search param
        const imageResp = await env.SITE.fetch(new Request(imageUrl));
        // Get the watermark image
        const watermarkResp = await fetch("https://pub-b4e6ed9616414ace9314e84c0a5cd3e8.r2.dev/kf.jpg");

        const response = (
          // Take the image and begin processing it
          await env.IMAGES.input(imageResp.body)
            // Draw the watermark on top of the image
            .draw(
              env.IMAGES.input(watermarkResp.body)
                .transform({ width: 100, height: 100 }),
              { bottom: 10, right: 10, opacity: 0.75 }
            )
            // Output the final image as PNG
            .output({ format: "image/png" })
        ).response();

        return response;
      } catch (error) {
        console.log(error);
        // If something goes wrong, fall back directly to the image
        return fetch(imageUrl);
      }
    }
    return env.ASSETS.fetch(request);
  }
}

There’s one wrinkle here specifically for any Workers-derived projects. You’ll need to set up a service binding to correctly request a path on the same URL as your application. For instance, if you make a fetch request to https://kristianfreeman.com/image.png from inside a Workers app connected to that same website, it will try to make a request to the origin - which doesn’t exist. You’ll get a 522 status code back.

To fix this, you can add a service binding to your wrangler.toml file:

name = "kristianfreeman-astro"
compatibility_date = "2024-10-22"
main = "./worker.ts"

services = [
  { binding = "SITE", service = "kristianfreeman-astro" }
]

[assets]
binding = "ASSETS"
directory = "./dist"
html_handling = "drop-trailing-slash"
not_found_handling = "404-page"

[observability]
enabled = true

[images]
binding = "IMAGES"

Now, by making a fetch request through that binding, requests stay inside the app, and will correctly resolve to the Workers Assets bundles or whatever other asset/path you’re trying to retrieve.
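Condensed down, the change inside the fetch handler above is just swapping a plain fetch for the binding (SITE is the binding name from the config):

// Before: a plain fetch to the site's own hostname loops back to an
// origin that doesn't exist, and returns a 522
// const imageResp = await fetch(imageUrl);

// After: route the request through the service binding, so it stays
// inside the app and resolves against the Workers Assets bundle
const imageResp = await env.SITE.fetch(new Request(imageUrl));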

Thursday, January 23, 2025

Integrating Workers Assets with Fullstack Apps

The Cloudflare team[1] recently released Workers Assets, a feature that allows you to serve static assets from Workers. More from the docs:

You can combine asset hosting with Cloudflare’s data and storage products such as Workers KV, Durable Objects, and R2 Storage to build full-stack applications that serve both front-end and back-end logic in a single Worker.

This is a great way to combine the power of Workers with the flexibility of a full-stack app. To use it, you just pass an assets directive to your wrangler.json file:

{
  "name": "saas-admin-template",
  "main": "./src/index.js",
  "assets": {
    "directory": "./dist"
  }
  // ...remainder of file
}

The src/index.js file is just a traditional Workers application. You can use it to serve requests for specific paths, and depending on your Assets configuration, it will call the Workers app conditionally, based on the presence of an asset:

export default {
  async fetch(request, env) {
    const url = new URL(request.url);
    if (url.pathname.startsWith("/api/")) {
      return new Response("Ok");
    }
    // Passes the incoming request through to the assets binding.
    // No asset matched this request, so this will evaluate `not_found_handling` behavior.
    return env.ASSETS.fetch(request);
  },
};

By doing this, you can serve your assets from the dist directory, and your code from the src directory. If an asset is found in the dist directory, it will be served, otherwise, your code will execute.

Integrating with Fullstack Apps

I’ve been working over the past few weeks on a SaaS admin template built on Astro and D1, Cloudflare’s SQL database.

It uses the Cloudflare integration for Astro to compile the app into a bunch of JS bundles that get dynamically served by Workers. These integrations with full-stack frameworks work by essentially hijacking all requests to your app, and routing them to the appropriate bundle.

Out of the box, Workers Assets plays well with these frameworks, with just a bit of extra work. For Astro, you can update the main directive in wrangler.json to point to the index.js file inside of Astro’s compiled _worker.js bundle. dist still remains the default directory for serving assets:

{
  "name": "saas-admin-template",
  "main": "./dist/_worker.js/index.js",
  "assets": {
    "directory": "./dist"
  }
  // ...remainder of file
}

Integrating with Cloudflare primitives

This will work for serving assets correctly for full-stack frameworks like Astro. But by doing this, you’re losing the ability to integrate with any Cloudflare primitives that must be defined as additional classes. This includes things like Cloudflare Workflows, as well as traditional Service Bindings.[2]

In my saas-admin-template project, I have a Cloudflare Workflow (read my Workflows intro) that can be run for any customer. Because it doesn’t get compiled into the Astro bundle, Wrangler[3] doesn’t know how to find it in the source bundle. This means that the standard Workflow configuration, seen below, will fail:

{
  "name": "saas-admin-template",
  "main": "./dist/index.js",
  "assets": {
    "directory": "./dist"
  },
  "workflows": [
    {
      "name": "saas-admin-template-customer-workflow",
      "binding": "CUSTOMER_WORKFLOW",
      "class_name": "CustomerWorkflow"
    }
  ],
  // ...remainder of file
}
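For reference, the Workflow class itself lives outside of the Astro bundle, in src/workflows/customer_workflow.js. Roughly, it looks something like this - a sketch assuming the standard WorkflowEntrypoint base class, with illustrative step names:

import { WorkflowEntrypoint } from "cloudflare:workers";

export class CustomerWorkflow extends WorkflowEntrypoint {
  async run(event, step) {
    // Each step is retried and persisted independently by Workflows
    const customer = await step.do("load customer", async () => {
      return { id: event.payload.customerId };
    });

    await step.do("send welcome email", async () => {
      console.log(`Sending welcome email to customer ${customer.id}`);
    });
  }
}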

Building a custom wrapper

How do we fix this? For now, we can build a custom wrapper that imports everything we need, both from the Astro bundle and the Cloudflare Workflow. This imports all the code from Astro, the Cloudflare Workflow, and then exports them appropriately. This is totally manual - and a hack - but it works for now. I think we’ll fix this, either at the platform or integration-level, in the future:

import astroEntry, { pageMap } from './_worker.js/index.js'
import { CustomerWorkflow } from '../src/workflows/customer_workflow.js'

export default astroEntry
export { CustomerWorkflow, pageMap }

In build scripts, both for dev and build, we can copy the wrapper into the dist directory, and make it the main entry point:

{
  "scripts": {
    "dev": "astro dev",
    "wrangler:dev": "astro build && npm run wrangler:wrapper && npx wrangler dev",
    "wrangler:wrapper": "cp src/workflows/wrapper.js dist/index.js",
    "deploy": "astro build && npm run wrangler:wrapper && wrangler pages deploy"
  }
}

Note the difference between wrangler:dev and dev. wrangler:dev will build the Astro bundle, copy in the wrapper, and then run the Wrangler dev server. dev just runs the traditional Astro dev server.

The Cloudflare integration does have the idea of a “platform proxy”, which is supposed to natively run wrangler as part of the Astro dev process, but it does not currently support our wrapper integration, and thus any Workflows you may want to run as part of API requests won’t be available.

In our wrangler.json, we can now import our custom wrapper:

{
  "name": "saas-admin-template",
  "main": "./dist/index.js",
  "assets": {
    "directory": "./dist"
  },
  "workflows": [
    {
      "name": "saas-admin-template-customer-workflow",
      "binding": "CUSTOMER_WORKFLOW",
      "class_name": "CustomerWorkflow"
    }
  ],
  // ...remainder of file
}

With that, we can now use service bindings and Workflows in our Astro application, and have intelligent fallbacks to Workers Assets for the application’s static assets. Like I mentioned earlier, this is a hack, and I expect us to improve this process in the future. But if you’re looking to build more complex applications using full-stack frameworks and Workers Assets, this is a viable way to do it right now!

Footnotes

  1. Disclosure: I work at Cloudflare as a Senior Developer Advocate.

  2. Note that any bindings-based integrations, like KV or D1, do work without any additional config. They get added to your request env, and you can access them inside of Astro endpoints (example docs).

  3. Cloudflare’s CLI tool, Wrangler.

Wednesday, January 1, 2025

About Me

I’m Kristian Freeman. I’m an American software developer and writer, based in San Marcos, Texas.

I’m a Developer Relations Engineering Manager at Cloudflare. I’ve been in the software industry since 2012. I’ve written on this site and elsewhere about software development since then, focused on edge computing, JavaScript, and open-source tools.

Contact

You can find me on GitHub or X.

Subscribe to my RSS feed to keep up with new posts.

Thursday, November 21, 2024

How to Generate Types for a Supabase Project

You can generate TypeScript types for a Supabase project using the Supabase CLI. This is useful when you make requests to your Supabase database and want to have types as part of your build process, and autocomplete in your editor.

First, you’ll need to find your project ID. You can find this in the Supabase dashboard. Then, run the following command in your terminal:

$ npx supabase gen types typescript --project-id $projectID > ./src/database.types.ts

You can add this to your package.json scripts:

{
  "scripts": {
    "generate-types": "npx supabase gen types typescript --project-id $projectID > ./src/database.types.ts"
  }
}

This creates a types file in your src directory (change as needed). Now, you can import the types file and pass it as a type parameter to your Supabase client while you instantiate it:

import { createClient } from "@supabase/supabase-js";
import { Database } from "./database.types";

export const supabase = createClient<Database>(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_API_KEY!
);

By doing this, every submethod and function in your Supabase client will now have types.

Here’s an example of using the generated types in one of my projects, which has a table called discord_users:

async function getSupabaseUser(userId: string) {
  const { data, error } = await supabase
    .from("discord_users")
    .select("*")
    .eq("discord_user_id", userId);

  if (error) {
    console.error(error);
    return null;
  } else {
    return data.length ? data[0] : null;
  }
}

const user = await getSupabaseUser(user_id);

if (user?.notifications_active) { // 👈 TypeScript autocomplete, this field is a boolean
  console.log("User has notifications enabled");
}
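The generated file also exposes row types you can alias and pass around explicitly. Here’s a small sketch using the same table (the DiscordUser alias is my own):

import { Database } from "./database.types";

// Each table's row type lives under public -> Tables -> <table name> -> Row
type DiscordUser = Database["public"]["Tables"]["discord_users"]["Row"];

function formatUser(user: DiscordUser) {
  return `${user.discord_user_id} (notifications: ${user.notifications_active})`;
}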

For more details on how this works, check out Supabase’s docs.