AIronClaw Functions Catalog: pre-built gateway lambdas
A 14-recipe Lua lambda catalog for AI gateways: response shaping, request preprocessing, error normalization. One-click install on MCP or LLM proxies.
TL;DR: AIronClaw Functions are small in-gateway transforms that fix the recurring problems of agent traffic: bloated tool responses, leaked secrets, missing tenant context. This catalog is a pre-built library of recipes that drop straight onto any LLM or MCP proxy. The post shows two walkthroughs — stripping base64 image blobs from a response, and injecting tenant identity into an n8n webhook.
Teams running AI agents in production keep hitting the same problems. Tool responses are bloated with data the model can't really use. Conversation history keeps growing, and the token bill grows with it. Sensitive data leaks from internal tools straight into the LLM context. Identity and tenant info from the API key never reaches the workflow that needs it. The fix is the same every time: a small Lua transform at the gateway. The AIronClaw Functions Catalog ships 14 of those transforms ready to install. One click on any MCP or LLM proxy. Editable after. This post walks the catalog and goes deep on two recipes.
What an AIronClaw Function is
A Function is a small Lua script attached to an MCP or LLM proxy. The gateway runs it at one of two points: before the request goes to the upstream (access phase), or before the response goes back to the client (response phase).
Inside the script you get an aifw global table. You use it to read or change the request, the response, and a small bag of context: the caller's API key tags, the tool name, the proxy UUID, the original Authorization header. No external I/O. No filesystem, no socket, no os.date. Just transforms over the bytes the gateway already has.
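A minimal sketch of the shape, assuming the `aifw` surface described above (the `proxy_uuid` field name is illustrative, not confirmed API):

```lua
-- Response-phase sketch: tag every JSON response with the proxy that served it.
local raw = aifw.context.response_body
if not raw or #raw == 0 then return end      -- nothing to transform

local data = aifw.json.decode(raw)
if not data then return end                  -- non-JSON body: leave it alone

data._served_by = aifw.context.proxy_uuid    -- context the gateway already has
aifw.context.response_body = aifw.json.encode(data)
```

That is the whole contract: read from `aifw.context`, transform, write back. Returning early is how a Function declines to act.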
Functions are how you handle policies that don't fit a typed rule. AIronClaw already ships rate_limit, ip_acl, response_replace, prompt_guard, static_cache, tool_description_inject. When you need something more (coercing types in a response, deriving a field from another, normalizing error envelopes, propagating tenant identity into a webhook), Functions are how you ship it without forking the gateway.
A typical Function is fifteen to sixty lines of Lua. It does one thing and stops. The catalog this post walks is fourteen of those, pre-written and tested.
What's in the AIronClaw Functions Catalog
The 14 entries split into three groups.
Response shaping (7 recipes). Run on the tool result before it reaches the agent. Strip empty and null fields (typical token saving 20-40% on chatty MCPs). Truncate long arrays with a {shown, total, hint} pagination marker. Replace base64 blobs over a configurable size with a <binary, X bytes, sha256:...> placeholder. Convert snake_case to camelCase. Coerce stringified primitives back to native types, by field name. Derive account_age_days from a numeric created_at. Strip n8n's workflowData execution-metadata envelope.
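As an example of the group, the core of a strip-empty-fields pass can be sketched like this (a sketch, not the shipped recipe; it assumes the decoder drops JSON nulls, and a full version would also handle null sentinels and array holes):

```lua
-- Recursively drop nil-equivalent values from a decoded tool result.
local function strip(v)
  if type(v) ~= "table" then return v end
  local out = {}
  for k, val in pairs(v) do
    local cleaned = strip(val)
    local empty = cleaned == nil
      or cleaned == ""
      or (type(cleaned) == "table" and next(cleaned) == nil)
    if not empty then out[k] = cleaned end
  end
  return out
end

local data = aifw.json.decode(aifw.context.response_body or "")
if data then
  aifw.context.response_body = aifw.json.encode(strip(data))
end
```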
Request preprocessing (6 recipes). Run on the request before it goes upstream. Inject defaults for missing optional arguments. Expand placeholders like {{caller}}, {{tenant}}, {{request_id}}, {{epoch}} in argument strings, using the caller's API-key tags. Inject _tenant_id, _environment, and _caller into the n8n webhook payload from tenant: and env: tags. Three LLM-specific recipes: prepend a per-tenant system message, downgrade the model to a cheaper fallback when the caller doesn't have tier:gold, and cap messages[] to the last N turns so the token bill doesn't keep growing as conversations get long.
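The messages-cap recipe, for instance, reduces to a few lines. This sketch assumes an OpenAI-style request body with a `messages` array; `MAX_TURNS` is the kind of knob you edit after install:

```lua
local MAX_TURNS = 12   -- tunable: how many trailing turns survive

local body = aifw.context.body
if not body or type(body.messages) ~= "table" then return end

local msgs, kept = body.messages, {}
-- Preserve a leading system message, then keep only the last MAX_TURNS entries.
if msgs[1] and msgs[1].role == "system" then kept[1] = msgs[1] end
local start = math.max(#msgs - MAX_TURNS + 1, #kept + 1)
for i = start, #msgs do kept[#kept + 1] = msgs[i] end

body.messages = kept
aifw.context.body = body
```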
Error normalization (1 recipe). Wrap any non-JSON upstream response (HTML error page, plain text, empty body) into a JSON-RPC error envelope. The agent then gets one consistent error shape no matter what the upstream did.
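A sketch of that wrap, under the same sandbox assumptions (a production version would also preserve or null the request `id`, which is elided here):

```lua
-- If the upstream body doesn't parse as JSON, wrap it in a JSON-RPC error.
local raw = aifw.context.response_body or ""
if aifw.json.decode(raw) ~= nil then return end   -- already JSON: pass through

aifw.context.response_body = aifw.json.encode({
  jsonrpc = "2.0",
  error = {
    code = -32000,
    message = "upstream returned a non-JSON response",
    data = string.sub(raw, 1, 512),   -- truncated raw body, for debugging
  },
})
```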
Every recipe shares the same sandboxed aifw surface (aifw.context, aifw.json, aifw.re, aifw.hash, aifw.uuid, plus a few more) and runs in either the request or the response phase of a per-proxy rule.
How do you strip base64 image blobs from MCP responses?
A common shape: an MCP tool (a chart renderer, a receipt-OCR extractor, a screenshot grabber, a document-to-image previewer) returns its structured output along with a binary preview encoded as base64.
{
  "result": {
    "content": [
      {
        "type": "text",
        "text": "[\n {\n \"chart_id\": \"q3-revenue\",\n \"summary\": \"Q3 revenue up 12% over Q2.\",\n \"preview\": \"data:image/png;base64,iVBORw0KGgo...(20480 bytes)...\"\n }\n]"
      }
    ]
  },
  "jsonrpc": "2.0",
  "id": 3
}
The 20KB of base64 enters the agent's context. The model can't really use it (it's a binary image as a string, not a vision-decoded picture), but it pays tokens for it on the way in, and again on every later turn that carries the conversation history.
The base64-blob-placeholder recipe is one regex substitution on the response body:
-- Replace any base64-alphabet run of at least THRESHOLD chars with a
-- short placeholder carrying the byte size and a truncated sha256.
local THRESHOLD = 256
local raw = aifw.context.response_body
if not raw or #raw == 0 then return end
local pattern = string.format("[A-Za-z0-9+/=]{%d,}", THRESHOLD)
local rewritten, n = aifw.re.gsub(raw, pattern, function(m)
  local b64 = m[0]                            -- the full matched run
  local decoded = aifw.base64.decode(b64)
  local size = decoded and #decoded or #b64   -- decoded byte count, or raw length if not valid base64
  return string.format("<binary, %d bytes, sha256:%s>",
    size, string.sub(aifw.hash.sha256(b64), 1, 16))
end, "o")
if n and n > 0 then
  aifw.context.response_body = rewritten      -- only rewrite if something matched
end
The agent now sees:
"preview": "data:image/png;base64,<binary, 20480 bytes, sha256:e097c1c4d8927549>"
The data:image/png;base64, prefix stays. The agent can still tell what kind of attachment was there. The base64 payload is replaced. The structure around it is intact. The token count for this field drops from a few thousand to under a hundred.
Two caveats. First, the regex looks at content patterns, not field names. It matches any run of base64-alphabet characters at least 256 chars long. False positives at that length are rare in practice (regular text has spaces and punctuation that break the run), but possible. Second, the recipe can't tell whether an attachment was actually meant for the model to interpret (a vision-described image) or just a side artifact (a debug screenshot the upstream attached). For the rare case where the agent needs the bytes, scope the rule to specific tools instead of *.
How do you inject tenant context into n8n webhooks?
The second recipe is more architectural. n8n MCP Trigger workflows run as webhooks. When the gateway forwards a tools/call, the webhook node receives the call's arguments as $json and triggers the workflow. The agent has to pass everything the workflow needs.
n8n-context-inject reads tenant: and env: tags from the caller's API-key permissions and injects them into params.arguments before forwarding upstream:
-- Only touch MCP tool calls.
local body = aifw.context.body
if not body or body.method ~= "tools/call" then return end
body.params = body.params or {}
body.params.arguments = body.params.arguments or {}

-- Tags live on the caller's API key, e.g. "tenant:acme", "env:prod".
local perms = aifw.context.auth
  and aifw.context.auth.api_key
  and aifw.context.auth.api_key.permissions
  or {}
local tenant_id, environment
for _, t in ipairs(perms) do
  local tn = aifw.re.match(t, "^tenant:(.+)$")
  if tn then tenant_id = tn end
  local en = aifw.re.match(t, "^env:(.+)$")
  if en then environment = en end
end

-- Gateway-written fields: the agent cannot set or spoof these.
body.params.arguments._tenant_id = tenant_id
body.params.arguments._environment = environment
body.params.arguments._caller = aifw.context.auth
  and aifw.context.auth.api_key
  and aifw.context.auth.api_key.name
aifw.context.body = body
In the n8n webhook node, $json._tenant_id is now reliably set from the API key, not from agent input. The workflow can branch on tenant without trusting the model. The agent doesn't have to know the tenant id. It can't spoof another tenant's value (the gateway writes the field). It can't forget to pass it.
This is the kind of rule that's tedious to write from scratch every time but obvious once you've seen it. That's the point of having a catalog.
How a recipe lands on a proxy
Each catalog entry is a record { id, title, description, category, appliesTo, defaultPhase, defaultTools, code }. The Install button builds a lambda rule with lua_code = entry.code, tags it with function_catalog_id = entry.id, and PUTs it onto the proxy's rules array. (Same endpoint hand-rolled rules use.) After install the rule shows up in the per-proxy Functions page with a small Catalog: <title> badge. It's editable like any other rule.
One thing worth flagging. Spec-compliant MCP tool responses wrap the actual payload as a JSON-stringified blob inside result.content[*].text. A naive lambda that walks data.result directly sees only the envelope, not the payload. The response-shaping recipes (strip-empty-fields, truncate-arrays, snake-to-camel, coerce-primitives, account-age-days, n8n-metadata-strip) all unwrap the inner JSON, transform, and re-encode. They fall back to walking data.result for non-content-shaped responses. The base64-blob-placeholder recipe is content-agnostic on purpose: it runs aifw.re.gsub on the raw body and doesn't care about the JSON shape.
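The unwrap-transform-rewrap pattern those six recipes share looks roughly like this (a sketch under the assumptions above; the identity `transform` stands in for whichever recipe body applies):

```lua
local function transform(t) return t end   -- replaced by the recipe's actual logic

local data = aifw.json.decode(aifw.context.response_body or "")
if not data or not data.result then return end

local content = data.result.content
if type(content) == "table" then
  -- Spec-shaped response: the payload is a JSON string inside text items.
  for _, item in ipairs(content) do
    if item.type == "text" and type(item.text) == "string" then
      local inner = aifw.json.decode(item.text)
      if inner then
        item.text = aifw.json.encode(transform(inner))
      end
    end
  end
else
  -- Fallback: non-content-shaped result, walk it directly.
  data.result = transform(data.result)
end

aifw.context.response_body = aifw.json.encode(data)
```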
For MCP servers running in async-mode (POST returns 202 and the result is delivered on the open SSE stream, as n8n MCP Trigger does), the gateway parses the SSE stream per-event and runs the same response-phase pipeline against each result line. The recipe code is unchanged across the two transports.
What's next
Two items on the catalog roadmap. First, install-time parameters. Most recipes have a tunable knob (THRESHOLD bytes, MAX_TURNS, the list of NUMERIC field names) that today requires editing the Lua after install. A small parameter form on the install modal will surface those without going to the editor. Second, response-cache integration. Pairing a catalog recipe with static_cache so the transform happens once per cache slot, not per request.
If the recipe you want isn't in the catalog, the Install button is also a "go write it yourself" path: pick a similar entry, open the Lua, edit. The sandboxed surface is documented at /docs/functions. You don't need to leave the dashboard to ship one.