
2 posts tagged with "serverless"


Auth0 with Hasura

· 5 min read
ShinaBR2
Life Developer

When integrating Auth0 with Hasura in a monorepo structure, what seems straightforward at first can quickly become complex. In this article, I'll share my journey of implementing single sign-on (SSO) with Auth0 and Hasura GraphQL API, focusing on creating a flexible and maintainable authentication system.

The Challenge

Hasura's documentation suggests using Auth0's user ID as the primary key in our database's users table. While this approach simplifies permissions at first glance, it creates a tight coupling between our database schema and our authentication provider. Let's explore why this might be problematic and how we can design a more flexible solution.

Standard Approach vs. My Design

The standard approach suggested by Hasura looks like this:

CREATE TABLE users (
  id TEXT PRIMARY KEY, -- This would be the Auth0 user ID
  name TEXT,
  email TEXT
);

Instead, I've chosen this structure:

CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(), -- Our internal identifier
  auth0_id TEXT UNIQUE, -- Store Auth0 ID as a separate field
  name TEXT,
  email TEXT
);

This design provides several advantages:

  1. Our database maintains its own identity system
  2. We can switch authentication providers without restructuring our database
  3. Our primary keys remain consistent in format and generation method
  4. Related tables maintain cleaner foreign key relationships

The Permission Challenge

I encountered an interesting challenge with Hasura permissions. Consider a posts table with a foreign key relationship to users:

CREATE TABLE posts (
  id UUID PRIMARY KEY,
  user_id UUID REFERENCES users(id), -- References our internal user ID
  content TEXT
);

If we were to follow Hasura's standard approach, permissions would look like this:

{
  "user_id": {
    "_eq": "X-Hasura-User-Id"
  }
}

The challenge arises because initially, we might think to use Auth0's user ID in the X-Hasura-User-Id claim. However, this wouldn't match our internal user IDs used in foreign key relationships.
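
To make the mismatch concrete, here's roughly what the decoded access-token claims could look like under each approach. The values below are illustrative, not taken from a real token:

// Illustrative decoded claims, sketched for comparison only.
// Standard approach: x-hasura-user-id carries the Auth0 ID.
const standardClaims = {
  'x-hasura-default-role': 'user',
  'x-hasura-allowed-roles': ['user'],
  'x-hasura-user-id': 'auth0|64f1c9d2e8a1b2c3d4e5f6a7', // never matches posts.user_id (UUID)
};

// Our approach: x-hasura-user-id carries our internal UUID, so the
// permission rule above compares UUID to UUID.
const ourClaims = {
  'x-hasura-default-role': 'user',
  'x-hasura-allowed-roles': ['user'],
  'x-hasura-user-id': '7d3f2a10-1b2c-4d5e-8f90-123456789abc',
};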

The Solution: Synchronized Authentication Flow

The key insight is that we need to synchronize user creation with claim generation. Instead of having two separate Auth0 actions, we combine them into a single, ordered process:

  1. First, ensure the user exists in our database
  2. Then, use our internal user ID in the custom claims

Here's how we implement this in a single Auth0 action:

exports.onExecutePostLogin = async (event, api) => {
  // First: synchronize the user with our database
  const result = await upsertUser(event);

  // Then: set claims using our internal ID
  // (the claim namespace must match the one configured in Hasura's JWT settings)
  api.accessToken.setCustomClaim('hasura_namespace', {
    'x-hasura-user-id': result.user.id, // Using our internal UUID
    // other headers
  });
};
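
The upsertUser call above is elided. As a rough sketch, assuming the action talks to Hasura directly with an admin secret stored in the action's secrets (the endpoint name, secret name, and unique-constraint name are illustrative assumptions):

// Hypothetical implementation: upsert the user via Hasura's GraphQL API,
// keyed on the unique auth0_id column.
const upsertUser = async (event) => {
  const response = await fetch(event.secrets.HASURA_GRAPHQL_ENDPOINT, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-hasura-admin-secret': event.secrets.HASURA_ADMIN_SECRET,
    },
    body: JSON.stringify({
      query: `
        mutation UpsertUser($auth0Id: String!, $name: String, $email: String) {
          insert_users_one(
            object: { auth0_id: $auth0Id, name: $name, email: $email }
            on_conflict: { constraint: users_auth0_id_key, update_columns: [name, email] }
          ) {
            id
          }
        }`,
      variables: {
        auth0Id: event.user.user_id,
        name: event.user.name,
        email: event.user.email,
      },
    }),
  });
  const { data } = await response.json();
  return { user: data.insert_users_one };
};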

This approach solves several problems:

  1. It ensures users exist in our database before setting claims
  2. It uses our internal IDs for permissions, maintaining consistency
  3. It works seamlessly for both new and existing users
  4. It keeps our database schema independent of Auth0

Why This Matters

This design choice provides several long-term benefits:

  1. Our database schema remains clean and provider-agnostic
  2. Foreign key relationships use consistent, internal IDs
  3. We can change authentication providers without restructuring our database
  4. Permissions work consistently across all related tables

While this approach requires a bit more initial setup than the standard one, it provides much more flexibility and maintainability in the long run.

Implementing Provider-Agnostic Authentication in a Monorepo

Architecture Overview

In a monorepo, we need to balance sharing code with maintaining flexibility. Here's how I structured the authentication system:

  1. Core authentication logic lives in packages/core
  2. Individual apps in apps/* remain provider-agnostic
  3. Environment variables are handled at the app level (a Turborepo requirement)

The Authentication Provider Implementation

Here's a clean implementation that abstracts away Auth0-specific details:

import { createContext, useContext, FC, ReactNode } from 'react';
import { Auth0Provider, useAuth0 } from '@auth0/auth0-react';

interface Props {
  config: {
    domain: string;
    clientId: string;
    audience: string;
  };
  children: ReactNode;
}

const AuthContext = createContext<any>(null); // holds our standardized auth value
const useAuthContext = () => useContext(AuthContext);

const AuthContextProvider = ({ children }: { children: ReactNode }) => {
  const auth0Data = useAuth0();

  // Transform Auth0-specific data into our standardized format
  const contextValue = {
    // Our standard auth context value, derived from auth0Data
  };

  return <AuthContext.Provider value={contextValue}>{children}</AuthContext.Provider>;
};

const AuthProvider: FC<Props> = ({ config, children }) => {
  return (
    <Auth0Provider
      domain={config.domain}
      clientId={config.clientId}
      authorizationParams={{
        audience: config.audience, // Critical for custom claims
        redirect_uri: window.location.origin,
      }}
    >
      <AuthContextProvider>{children}</AuthContextProvider>
    </Auth0Provider>
  );
};

export { AuthProvider, useAuthContext };

This design provides several benefits:

  1. Apps only interact with a generic AuthProvider and useAuthContext
  2. Auth0-specific dependencies stay contained in packages/core
  3. Switching providers only requires changing the core implementation, not application code
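
For example, an app could consume this as follows. This is a sketch only: the import path, the Vite-style env-var names, and the exact context shape are illustrative assumptions:

import { AuthProvider, useAuthContext } from 'core';

const Profile = () => {
  // Assumed context shape: user plus signIn/signOut helpers
  const { user, signIn, signOut } = useAuthContext();

  if (!user) return <button onClick={signIn}>Sign in</button>;
  return <button onClick={signOut}>Sign out ({user.email})</button>;
};

const App = () => (
  <AuthProvider
    config={{
      // Env vars live at the app level, per the Turborepo note above
      domain: import.meta.env.VITE_AUTH0_DOMAIN,
      clientId: import.meta.env.VITE_AUTH0_CLIENT_ID,
      audience: import.meta.env.VITE_AUTH0_AUDIENCE,
    }}
  >
    <Profile />
  </AuthProvider>
);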

Critical Insights: The Auth0-Hasura Connection

The interaction between Auth0 and Hasura requires careful attention to detail. Here's a key insight I discovered: the audience field in Auth0Provider configuration is crucial for custom claims to work properly.

[Image: Auth architecture]

Even if the Auth0 setup includes custom claims in the post-login event, without the correct audience configuration these claims won't appear in the JWT. This subtle requirement isn't immediately obvious from the documentation, but it's essential for proper authorization in Hasura.
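
A quick way to verify this during development is to decode the access token and check that the namespaced claims are actually present. A small sketch, assuming the jwt-decode package (the hook name is mine):

import { useAuth0 } from '@auth0/auth0-react';
import { jwtDecode } from 'jwt-decode';

// Debugging helper: log the Hasura claims from the current access token.
const useVerifyHasuraClaims = () => {
  const { getAccessTokenSilently } = useAuth0();

  return async () => {
    const token = await getAccessTokenSilently();
    // Without the correct audience, Auth0 returns an opaque token and
    // decoding fails; with it, the namespaced claims should appear.
    const claims = jwtDecode<Record<string, unknown>>(token);
    console.log(claims['hasura_namespace']);
  };
};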

Lessons Learned

  1. Design database schema to be authentication-provider-agnostic
  2. Use abstraction layers in the monorepo to isolate authentication implementation details
  3. Pay special attention to JWT token configuration, particularly the audience field

By following these principles, I built a flexible authentication system that's easier to maintain and adapt as the application grows or requirements change.

Deploy Firebase Functions in monorepo with pnpm

· 5 min read
ShinaBR2
Life Developer

Problem

I am a fan of serverless solutions, including Firebase Cloud Functions, but to this day it still does not natively support monorepos with pnpm. This makes for a very frustrating development experience. After a few hours of researching, trying, failing, and repeating the cycle, I figured out a hack to solve this problem. See the issue here: https://github.com/firebase/firebase-tools/issues/653


Motivation

Thanks to the community, I hope this part will make sense for future readers so they can choose the right approach for their situation.

The problem I want to solve is deploying Firebase Cloud Functions in a CI environment: we set up CI once, and the CI server handles things automatically for us from then on.

Here are the important parts that make it clearer how things work.

The folder structure looks like this:

root
|- apps
|  |- api
|- packages
|  |- core
|- firebase.json
|- pnpm-workspace.yaml

The apps/api/package.json should look like this:

{
  "name": "api",
  "main": "dist/index.js",
  "dependencies": {
    "firebase-functions": "^4.1.1",
    "core": "workspace:*"
  }
}

Explanation of the apps/api/package.json:

  • The name field is a MUST since it defines how module resolution works. You may be familiar with pnpm commands, for example pnpm install -D --filter api; the api there is the value of the name field.
  • The main field describes how Node.js resolves your code. When reading the code base, Node.js won't know where to get started if you don't tell it. Setting main to dist/index.js means: "Hey Node.js, look for the file dist/index.js at the same level as the package.json file and run it."

Now let's go to the tricky part!

Hacky solution

Solution: https://github.com/Madvinking/pnpm-isolate-workspace

The idea is to bundle all the dependencies into one single isolated workspace, with some tweaks to the package.json file, because the firebase deploy command does not support pnpm's workspace:* protocol. I tested this many times in both my local environment and the CI server: as long as the package.json file contains the workspace:* protocol, deployment fails, even if the code is already built.

Steps (collapsed into a single script after this list):

  • Build the Cloud Functions locally; the output will be in apps/api/dist.
  • Change the firebase.json source field to "source": "apps/api/_isolated_", and remove the predeploy hook. The predeploy hook defines which commands run BEFORE deploying the Cloud Functions (via the firebase deploy command). I remove it because the code base was already built in the previous step.
  • Run pnpx pnpm-isolate-workspace api at the root folder; it will create the folder apps/api/_isolated_.
  • Copy the build output into the newly created folder: cp -r apps/api/dist apps/api/_isolated_
  • In apps/api/_isolated_, run mv package.json package-dev.json
  • In apps/api/_isolated_, run mv package-prod.json package.json
  • In apps/api/_isolated_, run sed -i 's|"core": "workspace:\*"|"core": "file:workspaces/packages/core"|g' package.json, thanks to this comment
  • Finally, run firebase deploy --only functions at the root folder
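
For CI, the whole list can collapse into one Node script. This is a sketch under a few assumptions: apps/api has a build script, and the string replacement mirrors the sed step exactly:

// deploy-functions.ts: hypothetical CI helper mirroring the manual steps
import { execSync } from 'node:child_process';
import { cpSync, readFileSync, renameSync, writeFileSync } from 'node:fs';

const isolated = 'apps/api/_isolated_';

// 1. Build the Cloud Functions locally (output lands in apps/api/dist)
execSync('pnpm --filter api build', { stdio: 'inherit' });

// 2. Isolate the workspace (creates apps/api/_isolated_)
execSync('pnpx pnpm-isolate-workspace api', { stdio: 'inherit' });

// 3. Copy the build output into the isolated folder
cpSync('apps/api/dist', `${isolated}/dist`, { recursive: true });

// 4. Swap the package.json files
renameSync(`${isolated}/package.json`, `${isolated}/package-dev.json`);
renameSync(`${isolated}/package-prod.json`, `${isolated}/package.json`);

// 5. Replace the workspace:* protocol with a file: path (same as the sed step)
const pkgPath = `${isolated}/package.json`;
writeFileSync(
  pkgPath,
  readFileSync(pkgPath, 'utf8').replace(
    '"core": "workspace:*"',
    '"core": "file:workspaces/packages/core"'
  )
);

// 6. Deploy; firebase.json must already point source at apps/api/_isolated_
execSync('firebase deploy --only functions', { stdio: 'inherit' });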

Questions?

  • Why do I need to rename the two package.json files in the apps/api/_isolated_ folder? The main reason is to remove the devDependencies, which reduces manual work for the next step.
    • The package-prod.json does NOT contain the devDependencies, and we don't need devDependencies for the deployment. Beyond that, the devDependencies may pull in packages from my other workspaces.
    • I don't know yet how to make the firebase deploy command use the package-prod.json file instead of package.json.
  • What exactly does the sed command do? Why do I need it?
    • This is the trickiest part. The sed command reads the file and replaces some strings with others, which is very low-level, risky, and not easy for everyone. It only makes sense to run it on the CI server, where the change stays isolated from your code base. You never want to see these changes in your git repository.
  • Why not install firebase-tools as a dependency and then run something like pnpm exec firebase deploy on the CI server?
    • That makes sense if you run the firebase deploy command from your local machine. On the CI server, I use w9jds/firebase-action instead, as explained next.
  • What does w9jds/firebase-action actually do, and WHY do I need to use it?
    • The most important part is the authentication process. To deploy Firebase Cloud Functions, "you" need the right permissions. On your local machine, for example, you need to run firebase login and grant access before doing anything. The same thing has to happen on the CI server: we grant the right permissions to a Google Service Account through the GCP_SA_KEY key, because in a CI environment there is no browser to let you sign in. So instead of manually running pnpm exec firebase deploy on the CI server, w9jds/firebase-action handles all of this for you.

Other notes

There are some problems with this approach, so please don't treat it as a perfect solution. Make sure you fully understand it, because you will likely have to touch it again in the future, unfortunately.