Start here: need to register a service and create a plan first? Follow the 5-minute setup.
Add payment protection to LangChain and LangChain.js tools using the x402 protocol. The library provides two complementary approaches:
| Approach | Best for | Payment layer |
| --- | --- | --- |
| requiresPayment wrapper/decorator | Direct tool invocation, CLI scripts, notebooks | Per-tool wrapper |
| Payment middleware on HTTP server | Serving the agent over HTTP | HTTP middleware |
Both use the same Nevermined plan, credits, and settlement flow — choose whichever fits your deployment model.

Installation

npm install @nevermined-io/payments @langchain/core @langchain/openai zod
The @nevermined-io/payments/langchain sub-path export provides the requiresPayment() wrapper. For the HTTP server approach, also install express (npm install express).

Approach 1: Tool Decorator / Wrapper

x402 Payment Flow (decorator)

Quick Start

In LangChain.js, requiresPayment() is a higher-order function that wraps the tool implementation:
import 'dotenv/config'
import { tool } from '@langchain/core/tools'
import { z } from 'zod'
import { Payments } from '@nevermined-io/payments'
import { requiresPayment } from '@nevermined-io/payments/langchain'

const payments = Payments.getInstance({
  nvmApiKey: process.env.NVM_API_KEY!,
  environment: process.env.NVM_ENVIRONMENT || 'sandbox',
})

const PLAN_ID = process.env.NVM_PLAN_ID!

// Protect a tool with payment — 1 credit per call
const searchData = tool(
  requiresPayment(
    (args) => `Results for '${args.query}': ...`,
    { payments, planId: PLAN_ID, credits: 1 }
  ),
  {
    name: 'search_data',
    description: 'Search for data on a given topic. Costs 1 credit.',
    schema: z.object({ query: z.string() }),
  }
)
The payment token is read from config.configurable.payment_token. Pass it when invoking the tool or agent.
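Conceptually, such a wrapper intercepts the call, reads the token from config, settles payment, then delegates to the underlying function. A simplified standalone sketch of that higher-order pattern (not the library's actual implementation; withPayment and settle are illustrative names):

```typescript
// Simplified sketch of the higher-order-wrapper pattern — settlement is
// stubbed out; the real requiresPayment redeems credits via Nevermined.
type ToolFn<A, R> = (args: A, config?: { configurable?: { payment_token?: string } }) => R

function withPayment<A, R>(fn: ToolFn<A, R>, settle: (token: string) => void): ToolFn<A, R> {
  return (args, config) => {
    const token = config?.configurable?.payment_token
    if (!token) throw new Error('Payment required: missing payment_token')
    settle(token) // stand-in for credit redemption
    return fn(args, config)
  }
}

// Usage: token present → settles and delegates; token missing → throws.
const charged: string[] = []
const protectedFn = withPayment(
  (args: { query: string }) => `Results for ${args.query}`,
  (t) => charged.push(t)
)
console.log(protectedFn({ query: 'AI' }, { configurable: { payment_token: 'tok' } })) // → "Results for AI"
```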

Invoking with payment

import { Payments } from '@nevermined-io/payments'

// Subscriber side — acquire token
const subscriber = Payments.getInstance({
  nvmApiKey: process.env.NVM_SUBSCRIBER_API_KEY!,
  environment: process.env.NVM_ENVIRONMENT || 'sandbox',
})

const token = await subscriber.x402.getX402AccessToken(PLAN_ID)
const accessToken = token.accessToken

// Invoke tool directly
const result = await searchData.invoke(
  { query: 'AI trends' },
  { configurable: { payment_token: accessToken } }
)

LLM-driven tool calling

import { HumanMessage, ToolMessage } from '@langchain/core/messages'
import { ChatOpenAI } from '@langchain/openai'

const llm = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 })
// searchData from above; summarizeData and researchTopic are further
// payment-protected tools defined the same way
const tools = [searchData, summarizeData, researchTopic]
const llmWithTools = llm.bindTools(tools)
const toolMap = new Map(tools.map((t) => [t.name, t]))

const messages = [new HumanMessage('Search for AI trends')]
const aiMessage = await llmWithTools.invoke(messages)
messages.push(aiMessage)

for (const toolCall of aiMessage.tool_calls || []) {
  const result = await toolMap.get(toolCall.name)!.invoke(
    toolCall.args,
    { configurable: { payment_token: accessToken } }
  )
  // Feed each tool result back so the LLM can produce a final answer
  messages.push(new ToolMessage({ content: result, tool_call_id: toolCall.id! }))
}

const finalMessage = await llmWithTools.invoke(messages)

LangGraph ReAct agent

The same payment-protected tools work with LangGraph’s createReactAgent:
import { createReactAgent } from '@langchain/langgraph/prebuilt'

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: 'gpt-4o-mini' }),
  tools: [searchData, summarizeData, researchTopic],
  prompt: 'You are a helpful research assistant.',
})

const result = await agent.invoke(
  { messages: [{ role: 'human', content: 'Research AI agents and summarize' }] },
  { configurable: { payment_token: accessToken } }
)

Dynamic Credits

Three patterns for credit calculation:
// Pattern 1: Static number — always costs 1 credit
const searchData = tool(
  requiresPayment(
    (args) => `Results for ${args.query}`,
    { payments, planId: PLAN_ID, credits: 1 }
  ),
  { name: 'search_data', description: '...', schema: z.object({ query: z.string() }) }
)

// Pattern 2: Arrow function — cost scales with output length
const summarize = tool(
  requiresPayment(
    (args) => `Summary of ${args.text}`,
    {
      payments, planId: PLAN_ID,
      credits: (ctx) => Math.max(2, Math.min(Math.floor(String(ctx.result).length / 100), 10)),
    }
  ),
  { name: 'summarize', description: '...', schema: z.object({ text: z.string() }) }
)

// Pattern 3: Named function — complex logic on args + result
function calcCredits(ctx: { args: Record<string, unknown>; result: unknown }): number {
  const topic = String(ctx.args.topic || '')
  const result = String(ctx.result || '')
  const base = 3
  const keywordExtra = Math.max(0, topic.split(' ').length - 3)
  const outputExtra = Math.floor(result.length / 200)
  return Math.min(base + keywordExtra + outputExtra, 15)
}

const research = tool(
  requiresPayment(
    (args) => `Report on ${args.topic}`,
    { payments, planId: PLAN_ID, credits: calcCredits }
  ),
  { name: 'research', description: '...', schema: z.object({ topic: z.string() }) }
)
The credits function receives { args, result } after tool execution.
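The shape of that context can be exercised standalone — a sketch that reuses the Pattern 3 logic above directly, with no Nevermined calls involved:

```typescript
// Same logic as Pattern 3: base cost, plus extra per topic keyword beyond
// three, plus extra per 200 characters of output, capped at 15 credits.
function calcCredits(ctx: { args: Record<string, unknown>; result: unknown }): number {
  const topic = String(ctx.args.topic || '')
  const result = String(ctx.result || '')
  const base = 3
  const keywordExtra = Math.max(0, topic.split(' ').length - 3)
  const outputExtra = Math.floor(result.length / 200)
  return Math.min(base + keywordExtra + outputExtra, 15)
}

// Short topic, short output → base cost only
console.log(calcCredits({ args: { topic: 'AI' }, result: 'brief' })) // → 3

// Five-word topic, 450-char output → 3 + 2 + 2
console.log(calcCredits({ args: { topic: 'large language model agent payments' }, result: 'x'.repeat(450) })) // → 7
```

The wrapper builds this `{ args, result }` object itself once the tool has returned, so the function sees both the caller's inputs and the produced output.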

Approach 2: HTTP Server with Payment Middleware

For serving the agent over HTTP, use payment middleware on your framework. Payment is handled at the HTTP layer — tools are plain functions with no decorators or payment config.

x402 Payment Flow (HTTP)

Server: LangChain

import 'dotenv/config'
import express from 'express'
import { HumanMessage, ToolMessage } from '@langchain/core/messages'
import { tool } from '@langchain/core/tools'
import { ChatOpenAI } from '@langchain/openai'
import { z } from 'zod'
import { Payments } from '@nevermined-io/payments'
import { paymentMiddleware } from '@nevermined-io/payments/express'

const payments = Payments.getInstance({
  nvmApiKey: process.env.NVM_API_KEY!,
  environment: process.env.NVM_ENVIRONMENT || 'sandbox',
})

const PLAN_ID = process.env.NVM_PLAN_ID!

// Plain tools — no requiresPayment, no config parameter
const searchData = tool(
  (args) => `Results for '${args.query}': ...`,
  { name: 'search_data', description: 'Search for data.', schema: z.object({ query: z.string() }) }
)

const summarizeData = tool(
  (args) => `Summary: ...`,
  { name: 'summarize_data', description: 'Summarize text.', schema: z.object({ text: z.string() }) }
)

// LLM + tools
const llm = new ChatOpenAI({ model: 'gpt-4o-mini', temperature: 0 })
const tools = [searchData, summarizeData]
const llmWithTools = llm.bindTools(tools)
const toolMap = new Map(tools.map((t) => [t.name, t]))

async function runAgent(query: string): Promise<string> {
  const messages: any[] = [new HumanMessage(query)]
  for (let i = 0; i < 10; i++) {
    const ai = await llmWithTools.invoke(messages)
    messages.push(ai)
    if (!ai.tool_calls?.length) return String(ai.content)
    for (const tc of ai.tool_calls) {
      const result = await (toolMap.get(tc.name) as any).invoke(tc.args)
      messages.push(new ToolMessage({ content: result, tool_call_id: tc.id! }))
    }
  }
  return String(messages.at(-1)?.content || 'No response.')
}

// Express app with payment middleware
const app = express()
app.use(express.json())
app.use(paymentMiddleware(payments, {
  'POST /ask': { planId: PLAN_ID, credits: 1 },
}))

app.post('/ask', async (req, res) => {
  const response = await runAgent(req.body.query)
  res.json({ response })
})

app.get('/health', (_req, res) => res.json({ status: 'ok' }))

app.listen(8000, () => console.log('Running on http://localhost:8000'))

Server: LangGraph

Replace the tool-call loop with LangGraph’s createReactAgent:
import { tool } from '@langchain/core/tools'
import { ChatOpenAI } from '@langchain/openai'
import { createReactAgent } from '@langchain/langgraph/prebuilt'
import { z } from 'zod'

// Plain tools — no payment wrappers
const searchData = tool(
  (args) => `Results for '${args.query}': ...`,
  { name: 'search_data', description: 'Search.', schema: z.object({ query: z.string() }) }
)

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: 'gpt-4o-mini' }),
  tools: [searchData, summarizeData], // summarizeData defined as in the LangChain server above
})

async function runAgent(query: string): Promise<string> {
  const result = await agent.invoke({ messages: [{ role: 'human', content: query }] })
  const messages = result.messages || []
  return String(messages.at(-1)?.content || 'No response.')
}
The HTTP app, middleware, and route handlers are identical to the LangChain version above.

Client: Full x402 HTTP Flow

import 'dotenv/config'
import { Payments } from '@nevermined-io/payments'

const SERVER_URL = process.env.SERVER_URL || 'http://localhost:8000'
const PLAN_ID = process.env.NVM_PLAN_ID!

const payments = Payments.getInstance({
  nvmApiKey: process.env.NVM_SUBSCRIBER_API_KEY!,
  environment: process.env.NVM_ENVIRONMENT || 'sandbox',
})

// Step 1: Request without token → 402
const resp402 = await fetch(`${SERVER_URL}/ask`, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ query: 'AI trends' }),
})
console.log(`Status: ${resp402.status}`) // 402

// Step 2: Decode payment requirements
const pr = JSON.parse(Buffer.from(resp402.headers.get('payment-required')!, 'base64').toString())
console.log(`Plan: ${pr.accepts[0].planId}`)

// Step 3: Acquire x402 token
const token = await payments.x402.getX402AccessToken(PLAN_ID)
const accessToken = token.accessToken

// Step 4: Request with token → 200
const resp200 = await fetch(`${SERVER_URL}/ask`, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'payment-signature': accessToken,
  },
  body: JSON.stringify({ query: 'AI trends' }),
})
const body = await resp200.json()
console.log(`Response: ${body.response}`)

// Step 5: Decode settlement receipt
const settlement = JSON.parse(
  Buffer.from(resp200.headers.get('payment-response')!, 'base64').toString()
)
console.log(`Credits charged:   ${settlement.creditsRedeemed}`)
console.log(`Remaining balance: ${settlement.remainingBalance}`)
console.log(`Transaction:       ${settlement.transaction}`)

x402 HTTP Headers

| Header | Direction | Description |
| --- | --- | --- |
| payment-signature | Client → Server | x402 access token |
| payment-required | Server → Client (402) | Base64-encoded payment requirements |
| payment-response | Server → Client (200) | Base64-encoded settlement receipt |
The settlement receipt (payment-response) contains:

| Field | Description |
| --- | --- |
| creditsRedeemed | Number of credits charged |
| remainingBalance | Subscriber’s remaining credit balance |
| transaction | Blockchain transaction hash |
| network | Blockchain network (CAIP-2 format) |
| payer | Subscriber wallet address |
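Both directions carry plain base64-encoded JSON, so either header can be inspected without any SDK. A minimal Node sketch round-tripping a hypothetical settlement receipt (field names from the receipt fields above; values invented):

```typescript
// Hypothetical settlement receipt with the documented field names.
const receipt = {
  creditsRedeemed: '1',
  remainingBalance: '99',
  transaction: '0xabc123',
  network: 'eip155:84532',
  payer: '0xPayerAddress',
}

// A server sends this base64-encoded in the payment-response header…
const header = Buffer.from(JSON.stringify(receipt)).toString('base64')

// …and the client decodes it exactly as in the flow above.
const decoded = JSON.parse(Buffer.from(header, 'base64').toString())
console.log(decoded.creditsRedeemed, decoded.network) // → 1 eip155:84532
```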

Decorator Configuration

With Agent ID

const myTool = tool(
  requiresPayment(
    (args) => `Result for ${args.query}`,
    {
      payments,
      planId: PLAN_ID,
      credits: 1,
      agentId: process.env.NVM_AGENT_ID,
    }
  ),
  { name: 'my_tool', description: '...', schema: z.object({ query: z.string() }) }
)

Multiple Plans (Python only)

@tool
@requires_payment(
    payments=payments,
    plan_ids=["plan-basic", "plan-premium"],
    credits=1,
)
def my_tool(query: str, config: RunnableConfig) -> str:
    ...

Scheme and Network

const myTool = tool(
  requiresPayment(
    (args) => `Result`,
    { payments, planId: PLAN_ID, credits: 1, network: 'eip155:84532' }
  ),
  { name: 'my_tool', description: '...', schema: z.object({ query: z.string() }) }
)
The decorator/wrapper automatically detects the payment scheme from plan metadata. Plans with fiat pricing (isCrypto: false) use nvm:card-delegation (Stripe). No code changes are needed on the agent side.

Complete Examples

Working seller/buyer agents with LangGraph — includes both Python and TypeScript variants:
Each includes:
  • src/server.ts / src/agent.ts — LangGraph createReactAgent with payment-protected tools
  • src/demo.ts — requiresPayment wrapper demo (seller only)
  • src/client.ts — HTTP client with full x402 payment flow

Environment Variables

# Nevermined (required)
NVM_API_KEY=sandbox:your-api-key         # Builder/server API key
NVM_SUBSCRIBER_API_KEY=sandbox:your-key  # Subscriber/client API key
NVM_ENVIRONMENT=sandbox
NVM_PLAN_ID=your-plan-id
NVM_AGENT_ID=your-agent-id              # Optional

# LLM Provider
OPENAI_API_KEY=sk-your-openai-key

Next Steps