How to calculate the border-radius of nested elements


Tags: CSS, HTML

The border-radius property allows you to round the edges of an element. Giving the same border-radius value to the parent element and child element doesn’t result in the best appearance, so how do you calculate the border radius of nested elements?

Check out the demo below:

In the example above, the two circles have the same radius and are inscribed in squares with rounded corners. Note that the border radius is the same as the radius of the circle. We want the arc at each corner of the inner square to be concentric with the corresponding arc of the outer square. This can be achieved in two ways:

  1. Radius of the inner square's border (Ri) + spacing between the squares (E) = radius of the outer square's border (Re);
  2. Radius of the outer square's border (Re) - spacing between the squares (E) = radius of the inner square's border (Ri).

Note that the centers of the circles inscribed in the squares do not coincide, nor do the endpoints of the border arcs. Still, the result is satisfactory.
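This rule can be expressed directly in CSS with custom properties and calc(); a minimal sketch (class names are hypothetical):

```css
.outer {
  --radius: 24px;
  --gap: 8px;
  border-radius: var(--radius);
  padding: var(--gap);
}

.inner {
  /* inner radius = outer radius - spacing */
  border-radius: calc(var(--radius) - var(--gap));
}
```

Changing either the outer radius or the gap keeps the corners visually concentric, since the inner radius is always derived from them.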

Bonus

In this interactive example, made by Jhey Tompkins, you can see how this rule applies in practice:


The minimum you need to know to test your APIs with CURL


CURL is a command-line tool that allows you to transmit data with URL syntax, supporting a myriad of protocols (DICT, FILE, FTP, FTPS, GOPHER, GOPHERS, HTTP, HTTPS, IMAP, IMAPS, LDAP, LDAPS, MQTT, POP3, POP3S, RTMP, RTMPS, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET, TFTP, WS and WSS). In this article, I will focus on using CURL to make HTTP requests to APIs, which, at least for me, is the most common use.

Installation

CURL is already installed on most Linux distributions and recent versions of Windows. To check if it’s installed, just run the curl command in the terminal. If you don’t have CURL installed, you can install it with the command sudo apt install curl (Ubuntu/Debian) or sudo yum install curl (CentOS/Fedora) or winget install curl (Windows).

And since it is common for us to work with REST APIs in web development, another command-line tool that will be useful is jq, which formats JSON in the terminal. To install jq, just run sudo apt install jq (Ubuntu/Debian), sudo yum install jq (CentOS/Fedora) or winget install jqlang.jq (Windows).

Our example API

For didactic purposes, I will use DummyJSON as an API.

Making a GET request

To make a GET request, just run the curl command followed by the URL you want to access. For example, to request data for product 1, just run the command curl https://dummyjson.com/products/1.

And, to format the output, just add a | jq at the end of the command:

curl https://dummyjson.com/products/1 | jq

Making a POST, PUT, PATCH or DELETE request with JSON in the body

To make a POST request, just run:

curl --json '{"title": "New product"}' https://dummyjson.com/products/add

curl will take care of adding the Content-Type: application/json and Accept: application/json headers (the --json option is available since curl 7.82.0). If you want to make a PUT, PATCH or DELETE request, add the -X option followed by the HTTP method you want to use. For example, to make a PUT, run:

curl -X PUT --json '{"title": "New title"}' https://dummyjson.com/products/1

You can also send a JSON file instead of typing the JSON in the terminal by putting an @ in front of the file name:

curl --json @file.json https://dummyjson.com/products/add

Or passing data from stdin (note that I use @- instead of @ to indicate that the data will come from stdin):

curl --json @- https://dummyjson.com/products/add < file.json

Making a request with headers

To make a request with headers, just run the curl command followed by the URL you want to access, and the -H option followed by the header you want to send. So, to send a Bearer Token, you would run the following command:

curl -H "Authorization: Bearer token" --json '{"title": "New product"}' https://dummyjson.com/products/add

Some Exercises

Julia Evans published a few exercises to help you become fluent in curl. It's worth taking a look at that post on her blog.


Creating native modals with the dialog element


Using custom dialog elements instead of native browser implementations such as alert, confirm, or prompt has been the standard in web development for quite some time (popularized by various jQuery plugins and by Bootstrap itself). With every new UI library that emerges[1][2][3], it is common for its authors to re-implement a modal with the framework of the moment (which may or may not implement WAI-ARIA accessibility).

But now, with the arrival of the <dialog> element in HTML5 (supported by 93.85% of browsers in use), it is much easier to create dialogs natively. In this article, we will see how to create a simple modal (and non-modal) dialog with the <dialog> element.

Understanding the dialog element

In the sense employed in user interface development, a dialog is a conversation between the system and the user, where the system expects a response from the user to continue. A dialog can be modal or non-modal. A modal dialog (that is, one that changes the mode of interaction of the user with the system) is one that locks the interface, preventing the user from interacting with the rest of the page until it is closed. A non-modal dialog (that is, one that does not change the mode of interaction of the user with the system), on the other hand, allows the user to interact with the rest of the page while the dialog is open.

The simplest way to put a non-modal dialog on the screen is as follows:

<dialog open>
  <p>Hello, world!</p>
  <form method="dialog">
    <button>Close</button>
  </form>
  </form>
</dialog>

Note the form, on line 3, with the dialog method. It is this form that sends actions to the dialog. It will be displayed like this:

What makes the example above a non-modal dialog is the use of the open attribute (line 1), which also means it cannot be closed with the Esc key. It's also possible to create a non-modal dialog using the JavaScript API:
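For example, something along these lines should work (the element id is hypothetical):

```html
<dialog id="greeting">
  <p>Hello, world!</p>
  <form method="dialog">
    <button>Close</button>
  </form>
</dialog>

<script>
  // show() opens the dialog as non-modal, just like the open attribute
  document.querySelector('#greeting').show()
</script>
```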

In order for it to behave like a modal, it is necessary to open it through its JavaScript API, as we will see next.

This time, we open and close the modal with JavaScript and put the form result in the output element when the modal is closed. Read the code carefully and try to understand what is happening.
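A minimal sketch of that idea (the ids are hypothetical):

```html
<dialog id="modal">
  <form method="dialog">
    <p>Do you confirm?</p>
    <button value="yes">Yes</button>
    <button value="no">No</button>
  </form>
</dialog>
<button id="open">Open modal</button>
<output id="result"></output>

<script>
  const modal = document.querySelector('#modal')

  // showModal() opens the dialog as a modal, with a ::backdrop overlay
  document.querySelector('#open').addEventListener('click', () => modal.showModal())

  // Submitting a form with method="dialog" closes the dialog, and
  // returnValue holds the value of the button used to submit it.
  modal.addEventListener('close', () => {
    document.querySelector('#result').textContent = modal.returnValue
  })
</script>
```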

Styling the modal

The dialog element can, of course, be styled like any other HTML element. However, note that, to style the overlay (the dark background behind the modal), you need to use the ::backdrop selector:
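For example, a quick sketch:

```css
dialog::backdrop {
  background: rgb(0 0 0 / 0.5);
  backdrop-filter: blur(2px);
}
```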

Polyfill

If you want to use dialog and need to support older browsers, you can use this polyfill.


References


  1. Material UI Modal ↩︎

  2. Ant Design Modal ↩︎

  3. Carbon Design System Modal ↩︎

Using fetch with TypeScript


Since fetch is practically universally supported on the most used web browsers, we may safely drop the use of Axios and other similar libraries in favor of fetch. In this article, I'll create a little wrapper for fetch that adds some conveniences to improve the developer experience.

The code

First, I will create a base function from which all the others will be derived:

// Extend the Error class to throw HTTP errors (any response outside the 2xx range)
class HTTPError extends Error {}

//            A generic type to type the response
// -----------\/
const query = <T = unknown>(url: RequestInfo | URL, init?: RequestInit) =>
  fetch(url, init).then((res) => {
    if (!res.ok)
      throw new HTTPError(res.statusText, { cause: res })

    return res.json() as Promise<T> // <--- Applying the generic type above
  })

In the code above, we:

  1. Create a new HTTPError class, in order to throw HTTP errors as they appear;
  2. Use a generic type in order to be able to type the response of the request.

Now, let’s extend the query function to enable us to serialize and send data on our requests:

const makeRequest
  // -----------\/ RequestInit['method'] is the (string) type fetch accepts for the HTTP method
  = (method: RequestInit['method']) =>
    //     | Those two generic types enables us to type the
    // \/--  data input (TBody) and output (TResponse) of the function.
    <TResponse = unknown, TBody = Record<string, unknown>>(url: RequestInfo | URL, body: TBody) =>
      query<TResponse>(url, {
        method,
        body: JSON.stringify(body), // <-- JSON Stringify any given object
      })

In the code above, we:

  1. Build a closure that first receives the HTTP method we want to call and then returns a function that takes the url and the body (which is, by default, JSON-stringified) of the request.

At this point, we can use our newly created functions like this:

// Adding type for the Product entity
type Product = {
  id: number
  title: string
  description: string
  price: number
  discountPercentage: number
  rating: number
  stock: number
  brand: string
  category: string
  thumbnail: string
  images: string[]
}

// Getting a single product
const product = await query<Product>('https://dummyjson.com/products/1')
console.log(product)

// Creates a function that makes POST requests
const post = makeRequest('POST')

// Adding a new product
const newProduct = await post<Product, Omit<Product, 'id'>>('https://dummyjson.com/products', {
  title: 'New Product',
  description: 'This is a new product',
  price: 100,
  discountPercentage: 0,
  rating: 0,
  stock: 0,
  brand: 'New Brand',
  category: 'New Category',
  images: [],
  thumbnail: '',
})

console.log(newProduct)

Fully functional, but not very “ergonomic”. Our code should also accept a base URL for all requests, make it easier to add headers (like an authorization token), and provide an easy way to make PATCH, PUT and DELETE requests.

Let’s refactor the code above in order to make it easy to add a base URL and pass a common header to all requests:

import { getToken } from 'my-custom-auth'

class HTTPError extends Error {}

const createQuery =
  (baseURL: RequestInfo | URL = '', baseInit?: RequestInit) =>
    <T = unknown>(url: RequestInfo | URL, init?: RequestInit) =>
      fetch(`${baseURL}${url}`, { ...baseInit, ...init }).then((res) => {
        if (!res.ok)
          throw new HTTPError(res.statusText, { cause: res })

         return res.json() as Promise<T>
       })

// This is the function where we define our base URL and headers
const query = createQuery(
  'https://dummyjson.com',
  {
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${getToken()}`, // If you need to add a token to the header, you can do it here.
    },
  })


const makeRequest = (method: RequestInit['method']) =>
  <TResponse = unknown, TBody = Record<string, unknown>>(url: RequestInfo | URL, body: TBody) =>
    query<TResponse>(url, {
      method,
      body: JSON.stringify(body),
     })

export const api = {
  get: query,
  post: makeRequest('POST'),
  delete: makeRequest('DELETE'),
  put: makeRequest('PUT'),
  patch: makeRequest('PATCH'),
}

In the code above, I:

  1. Created a createQuery function, a closure where I can set a default url and init parameters;
  2. Created a new query function, where I use the createQuery function to define the base URL and the default parameters that all requests should have (note the dummy getToken function that adds a Bearer Token to each request);
  3. Finally, export the api object with all the commonly used functions to make requests.

You may want to return the body of a request that resulted in an error, for example when your backend returns the standardized problem details format. In that case, the refactored code would be:

import { getToken } from 'my-custom-auth'

// Extend the HTTPError class to carry the status and response body
export class HTTPError extends Error {
  readonly response: any;
  readonly status: number;
  readonly statusText: string;

  constructor(status: number, statusText: string, response: any) {
    super(statusText);
    this.status = status;
    this.statusText = statusText;
    this.response = response;
  }
}

const createQuery =
  (baseURL: RequestInfo | URL = '', baseInit?: RequestInit) =>
    <TResponse = unknown>(url: RequestInfo | URL, init?: RequestInit) =>
      fetch(`${baseURL}${url}`, { ...baseInit, ...init }).then(async (res) => {
        // Now, we get the JSON response early
        const response = await res.json()

        if (!res.ok)
          throw new HTTPError(res.status, res.statusText, response);

         return response as TResponse
       })

// In this function, we define our base URL and headers.
const query = createQuery(
  'https://dummyjson.com',
  {
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${getToken()}`, // If you need to add a token to the header, you can do it here.
    },
  })


const makeRequest = (method: RequestInit['method']) =>
  <TResponse = unknown, TBody = Record<string, unknown>>(url: RequestInfo | URL, body: TBody) =>
    query<TResponse>(url, {
      method,
      body: JSON.stringify(body),
     })

export const api = {
  get: query,
  post: makeRequest('POST'),
  delete: makeRequest('DELETE'),
  put: makeRequest('PUT'),
  patch: makeRequest('PATCH'),
}

And now, you can use your new wrapper around fetch like this:

type Product = {
  id: number
  title: string
  description: string
  price: number
  discountPercentage: number
  rating: number
  stock: number
  brand: string
  category: string
  thumbnail: string
  images: string[]
}

// GET https://dummyjson.com/products/1
api
  .get<Product>('/products/1')
  .then(console.log)
  .catch((err) => {
    if (err instanceof HTTPError) {
      // Handle HTTP errors
      console.error('HTTPError', err);
      return;
    }

    if (err instanceof SyntaxError) {
      // Handle errors while parsing the response
      console.error('SyntaxError', err);
      return;
    }

    console.error('Other errors', err);
});

Final thoughts

The code above is not as full-featured as Axios, redaxios, ky or wretch, but, most of the time, it is all I need when I'm working with React using SWR or TanStack Query (and on the backend too). Give me your thoughts about the code and show me your improvements (if you want). You can access this code on this gist.

How to push an empty commit on Git?


Tags: Git

Have you ever had to run a CI/CD pipeline that is triggered by a commit, when there are no code changes to be committed?

Well, just use the command below:

git commit --allow-empty -m "ci: trigger pipeline with an empty commit"

And then, just push the commit to the remote repository:

git push

That’s it!

New year, new blog (or how I created this blog for 2023)


New year, new blog! After delaying the publication of my blog for a long time, I finally finished developing it using Next.js, PocketBase, and Mantine. Want to know why I chose these tools? Then, keep reading here with me.

I’ve been creating blogs for a long time (since 2007). I started with Blogger, but then I migrated to WordPress. And that’s when I started to be interested in Linux and programming. I spent a lot of time creating themes, customizing plugins, reading documentation, and translating themes and plugins for WordPress. And, although WordPress is an excellent CMS for those who just want to publish a website as quickly as possible, this time I wanted something more personalized, containing all the features I would like to have and nothing more. From there, I started researching.

I tried several CMSs (Directus, KeystoneJS, Strapi and Cockpit), but the one I found simplest to meet my needs was PocketBase, mainly because I intended to self-host my solution. The other CMSs are great, but when you’re a team of one, you have to choose the right tools. And what’s easier for one person to manage than an SQLite database? PocketBase already exposes database updates in real time with SSE, provides authentication and file management (with integration with S3), SDKs for JavaScript and Flutter, and can even be used as a framework. All this within a small binary written in Go (if you want to know more about PocketBase, read the documentation and watch this video from Fireship, where he shows how to create a real-time chat system with PocketBase). And finally, in order to have real-time backups of my SQLite database and send them to S3, I use Litestream. Well, having made the choice for the backend, let’s move on to the frontend.

I tried Astro (which is excellent!) and Remix, but I ended up choosing Next.js, mainly because of the Vercel image generation library, which I use to generate images of the post, like this one:

The job that's never started as takes longest to finish

And here we come to the choice of what I would use to create the styles of the blog. In recent years, I have styled React applications with CSS Modules, Styled Components, Stitches, Tailwind and Chakra UI. I even started to create a Design System with Stitches and Tailwind, but creating an entire Design System by myself would take a long time, so I decided to take the shorter route.

I tried a few libraries until I found Mantine, an excellent library packed with everything I wanted to use. From there, the work consisted of implementing the blog with the basic initial features:

  • Incremental Static Regeneration of posts;
  • Form validation with Zod;
  • Nested comment system with anti-spam verification provided by Akismet;
  • Display of commentator avatars with Gravatar;
  • SVG Favicon with light/dark mode;
  • I18n (Portuguese and English).

With all that ready, I changed the canonical URLs of my articles on Dev.to to point to the new URLs and finally published my blog.

Of course, if you’re reading this on my blog now, you’ll see that an important feature is still missing: search. I’ll be studying possible solutions for this in the coming days, but I’ll already let you know that you can preview the functionality by pressing the / key on any page.

Happy 2023, everyone 🎉.

Introduction to GraphQL


GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data, developed by Facebook in 2012 and open sourced in 2015. The goal was to create a query language that gives clients fine-grained control over the data they request from an API server.

A GraphQL service is created by defining types and the fields on those types in a schema. A common way of defining the schema of your GraphQL service is through the GraphQL Schema Definition Language (SDL).

In this article, I’ll show how to create a GraphQL schema compliant with the Relay GraphQL Server specification.

Defining your schema in GraphQL

A GraphQL schema should inform users about all the types and objects that can be queried and mutated on the graph. GraphQL even provides a feature to query metadata about those types and objects, which can be used to document the API.

Let’s define a simple schema using GraphQL SDL (Schema Definition Language):

"""
Serialize and validate JavaScript dates.
"""
scalar Date

"""
Node interface
"""
interface Node {
  id: ID!
}

"""
Priority level
"""
enum Priority {
  LOW
  MEDIUM
  HIGH
}

"""
A task that the user should complete
"""
type Task implements Node {
  "Task ID"
  id: ID! # ID is a GraphQL special type for IDs.
  "Task title"
  title: String!
  "Task creation date"
  created: Date!
  "Task modification date"
  modified: Date
  priority: Priority!
}

"""
Needed input data to add a new task.
"""
input AddTaskInput {
  title: String!
  priority: Priority
}

type Query {
  "Get a task by ID"
  task(id: ID!): Task
}

type Mutation {
  addTask(input: AddTaskInput): Task
}

  1. First, we define a custom Date scalar that should validate and serialize Date objects;
  2. We define a Node interface. I’ll explain why I’m defining this interface in the next topic;
  3. We define an enumeration type with the valid priority levels of a task;
  4. We create our Task type with all the fields it should contain. Notice that every field ending with an exclamation mark is non-nullable (required);
  5. We add an input called AddTaskInput that defines the data required to add a new Task;
  6. In the Query type (which is a GraphQL reserved type), we define which queries are available from our root object;
  7. In the Mutation type (which is a GraphQL reserved type), we define which operations that alter our data are available. Such operations are called mutations.

Notice that, in GraphQL, text between quotes serves as documentation (it will be parsed and displayed in your GraphiQL web documentation interface), while comments that start with # are ignored.

Querying your data in GraphQL

Typically, you’d query a GraphQL server like this:

{
  task(id: "2") {
    title
  }
}

Which would return the following, in JSON format:

{
  "data": {
    "task": {
      "title": "Write GraphQL tutorial"
    }
  }
}

In the query above, we started with a special “root” object, from where we select the task field with id equal to "2". Then, we select the title field from the task object. But what if no task has an id equal to "2"? In that case, our response would be:

{
  "data": {
    "task": null
  }
}

Or, in case of an error, we would receive this response:

{
  "data": {
    "task": null
  },
  "errors": [
    {
      "message": "Internal server error."
    }
  ]
}

You may want to rename a field before using your data. You can create aliases just like this:

{
  todo: task(id: "2") {
    name: title
  }
}

And this would be the response:

{
  "data": {
    "todo": {
      "name": "Write GraphQL tutorial"
    }
  }
}

GraphQL also provides the ability to create query fragments and to set up directives when querying your data. I’ll need to add more complexity to our current schema in order to explain fragments properly, so, for now, let’s move on to the next topic.
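As a quick taste, a directive lets the client include or skip fields conditionally. A sketch using the built-in @include directive (the variable name is hypothetical):

```graphql
query getTask($id: ID!, $withPriority: Boolean!) {
  task(id: $id) {
    title
    # priority is only returned when $withPriority is true
    priority @include(if: $withPriority)
  }
}
```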

The Relay GraphQL Server specification

Even if you don’t plan to use Relay (or even React) to consume your GraphQL data, its specification is very useful and provides common ground for what developers should expect from a GraphQL server.

Remember that Node interface we defined above? Its purpose is to provide global object identification for all the GraphQL nodes in our server, so a GraphQL client can handle re-fetching and caching in a standardized way. Note that each ID must be globally unique across your application.
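A common convention (though not required by the specification) is to build global IDs by base64-encoding the type name and the local ID together. A sketch in JavaScript, assuming a Node.js environment:

```javascript
// Encode "Type:id" as an opaque, globally unique identifier
const toGlobalId = (type, id) => Buffer.from(`${type}:${id}`).toString('base64')

// Decode a global ID back into its parts
const fromGlobalId = (globalId) => {
  const [type, id] = Buffer.from(globalId, 'base64').toString('utf8').split(':')
  return { type, id }
}

console.log(toGlobalId('Task', '2')) // => "VGFzazoy"
console.log(fromGlobalId('VGFzazoy')) // => { type: 'Task', id: '2' }
```

The encoding makes IDs opaque to clients, so the server is free to change how they are generated later.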

As the Node interface will be used for all objects in our server, GraphQL provides a reusable unit called a fragment. Now, let’s add a new way to query nodes in our schema:

# ...

type Query {
  "Get a node by ID"
  node(id: ID!): Node
}

# ...

Notice that the task query was removed, as it is no longer needed. And now, we will redo our query using a fragment:

# We name the query and pass a variable
# to improve the development experience.
query getTask($id: ID!) {
  node(id: $id) {
    ...taskFields
  }
}

fragment taskFields on Task {
  title
}

And now, we will change our schema to comply with the Relay GraphQL Server specification. Take some time to read the comments in order to understand what is being done here.

"""
Serialize and validate JavaScript dates.
"""
scalar Date

"""
Node interface
"""
interface Node {
  id: ID!
}

"""
Priority level
"""
enum Priority {
  LOW
  MEDIUM
  HIGH
}

"""
A task that the user should complete
"""
type Task implements Node {
  "Task ID"
  id: ID! # ID is a GraphQL special type for IDs.
  "Task title"
  title: String!
  "Task creation date"
  created: Date!
  "Task modification date"
  modified: Date
  priority: Priority!
}

"""
Define an edge of the task,
containing a node and a pagination cursor.
"""
type TaskEdge {
  cursor: String!
  node: Task
}

"""
Define a connection between the
task edges, including the PageInfo
object for pagination info.
"""
type TaskConnection {
  edges: [TaskEdge] # Yes, we use brackets to define arrays in GraphQL
  pageInfo: PageInfo!
}

"""
Provides pagination info
for a cursor-based pagination
"""
type PageInfo {
  hasNextPage: Boolean!
  hasPreviousPage: Boolean!
  startCursor: String
  endCursor: String
}

"""
Needed input data to add a new task.
"""
input AddTaskInput {
  title: String!
  priority: Priority
}

type Query {
  node(id: ID!): Node
  tasks(
    first: Int, # The amount of tasks requested
    after: String # Cursor to mark the point
  ): TaskConnection
}

type Mutation {
  addTask(input: AddTaskInput): Task
}

At this point, the graph metaphor used here should be very clear. Each edge of your graph has a node, and a connection of edges has a collection of nodes that can be paginated. Note that this specification expects you to implement cursor-based pagination rather than offset pagination (follow the previous link for more information about their differences).
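With that schema in place, a client could paginate tasks like this (the cursor value passed in $after would come from a previous page's endCursor):

```graphql
query getTasks($first: Int!, $after: String) {
  tasks(first: $first, after: $after) {
    edges {
      cursor
      node {
        title
        priority
      }
    }
    pageInfo {
      hasNextPage
      endCursor
    }
  }
}
```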

And that’s all we need to comply with the Relay GraphQL Server Specification.

In the next article, I’ll implement a GraphQL server using all the concepts that we learned here.

Source:

GraphQL

What is a first-class citizen in computer science?


In computer science, a first-class citizen is an entity that supports all operations available to other entities. Some of the available operations are:

  • They may be named by variables;
  • They may be passed as arguments to procedures;
  • They may be returned as the results of procedures;
  • They may be included in data structures.

It was the British computer scientist Christopher Strachey (1916-1975) who first coined this notion of first-class citizen status of elements in a programming language in the 1960s.

In JavaScript, for example, functions are first-class citizens, as all of the operations cited above can be applied to them. Let’s see some examples:

A simple function definition in JavaScript

function sum(a, b) {
  return a + b
}

Assigning a function to a constant

const sum = (a, b) => a + b

// or
// 
// const sum = function (a, b) {
//   return a + b
// }

Passing a function as an argument

function sum(a, b, callback) {
  const result = a + b

  if (typeof callback === 'function') {
    callback(result) // pass the result as an argument of `callback`
  }

  return result
}

//        Pass `console.log` as the callback function
// -------\/
sum(2, 2, console.log) // => 4

Return a function

function sum(a, b, callback) {
  const result = a + b

  if (callback) {
    return () => callback(result)
  }

  return result
}

//            The callback is the sum of the result with 2.
// ------------------\/
const fn = sum(2, 2, (result) => sum(2, result))
//    ^---- Store the returned function in a variable

//          Execute the function
// ---------\/
console.log(fn()) // => 6

Including a function in a data structure

// Store the basic operations in an object
const operations = {
  sum: (a, b) => a + b,
  sub: (a, b) => a - b,
  mul: (a, b) => a * b,
  div: (a, b) => a / b,
}
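Reproducing the object above to keep the example self-contained, those stored functions can then be looked up and called like any other property:

```javascript
// Store the basic operations in an object
const operations = {
  sum: (a, b) => a + b,
  sub: (a, b) => a - b,
  mul: (a, b) => a * b,
  div: (a, b) => a / b,
}

// Pick an operation by name at runtime
const op = operations['sum']
console.log(op(2, 3)) // => 5

// Or access it directly
console.log(operations.sub(5, 3)) // => 2
```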

What is a RPC (Remote Procedure Call)?


A remote procedure call (RPC) is a mechanism of communication between two computational environments, where one can be identified as a client, while the other can be identified as a server.

From the client’s point of view, an RPC is just a matter of calling a function with the desired arguments and waiting for the response, in order to continue the program’s execution.

Diagram on how a RPC (remote procedure call) works

Thus, using RPCs allows a programmer to distribute a system according to their needs.
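To make the idea concrete, here is a minimal sketch in JavaScript (all names are hypothetical). The client-side stub hides the transport, so calling a remote procedure looks like an ordinary function call; here the "transport" is simulated by a local function standing in for the network:

```javascript
// Server side: the procedures actually live here
const procedures = {
  sum: (a, b) => a + b,
}

// Simulated transport: in a real system this would be an HTTP
// request, a socket, etc. It receives and returns serialized messages.
const transport = (request) => {
  const { method, params } = JSON.parse(request)
  const result = procedures[method](...params)
  return JSON.stringify({ result })
}

// Client side: a stub that makes the remote call look local
const remoteCall = (method, ...params) => {
  const response = transport(JSON.stringify({ method, params }))
  return JSON.parse(response).result
}

// From the client's point of view, this is just a function call
console.log(remoteCall('sum', 2, 3)) // => 5
```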


Use GitHub actions to publish your package on NPM


Recently, I created a package with the ESLint settings I like to use in my React projects, as I was tired of always having to configure them when starting new React projects. Publishing an NPM package is just a matter of running npm publish in the directory of your package (considering, of course, that you already have an NPM account and are authenticated in your terminal). But I wanted to automate this publishing every time I created a new release.

In order to do that, I used the following GitHub Action:

# File: .github/workflows/npm-publish.yml

# This workflow will publish a package to NPM when a release is created
# For more information see: https://help.github.com/actions/language-and-framework-guides/publishing-nodejs-packages

name: Publish Package to npmjs

on:
  release:
    types: [created]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 16
          registry-url: https://registry.npmjs.org/
      - run: npm publish
        env:
          NODE_AUTH_TOKEN: ${{secrets.NODE_AUTH_TOKEN}}

If you read the YAML file above (which you should put at .github/workflows/npm-publish.yml in your git repository), you will have noticed that the environment variable NODE_AUTH_TOKEN must be defined. Create a new automation access token in the NPM control panel:

  1. Access your NPM account and click on “Access tokens”: Access tokens on NPM

  2. Name your new access token and select the “Automation” type for it:

Creating access token on NPM

  3. Go to your GitHub repository, click on “Settings > Secrets > Actions > New repository secret”, name it NODE_AUTH_TOKEN and paste the access token you just got from NPM:

Create a new secret on the GitHub repository

  4. Create a new release for your package. This should trigger our GitHub Action and publish the package to NPM.

Creating a new release on GitHub