2026-03-29
I'm happy to release Proxelar 0.3.0, which adds the feature I've wanted since the beginning: Lua scripting. You can now write simple scripts that intercept, modify, block, or mock HTTP traffic as it flows through the proxy.
Until now, Proxelar could capture and display traffic, but it couldn't change it. That made it a viewer, not a tool. With scripting, Proxelar becomes programmable — you can inject headers, block ad domains, mock API endpoints, rewrite responses, strip cookies, and anything else you can express in a few lines of Lua. This is the single biggest feature since the 0.2.0 rewrite.
Create a Lua script that defines on_request and/or on_response — both are optional. Pass it to Proxelar with --script:
proxelar --script my_script.lua
That's it. The proxy loads the script at startup, and every request and response flows through your hooks before being forwarded. Here's the full API:
-- Called before forwarding the request to the upstream server.
-- Return the request table to forward it (modified or not).
-- Return a response table to short-circuit (the request never reaches upstream).
-- Return nil to pass through unchanged.
function on_request(request)
    -- request.method    "GET", "POST", ...
    -- request.url       "https://example.com/path?q=1"
    -- request.headers   { ["host"] = "example.com", ... }
    -- request.body      string (may contain binary data)
end
-- Called before returning the response to the client.
-- Return the response table (modified or not), or nil to pass through.
function on_response(request, response)
    -- response.status    200
    -- response.headers   { ["content-type"] = "text/html", ... }
    -- response.body      string
end
The design is deliberately minimal: two hooks, plain Lua tables, no framework to learn. If you've ever written a line of Lua (or even if you haven't — the syntax takes about five minutes to pick up), you can start scripting your proxy immediately. And if five minutes still sounds like too much effort, just describe what you want to an LLM and paste the output.
Scripting is the feature that turns a proxy from a debugging tool into a development platform. For a Rust project, I wanted a scripting language that could be embedded with zero system dependencies.
Lua checks every box. It's the standard scripting language for networking tools: nginx, HAProxy, nmap, and Redis all use it. The runtime is tiny, fast (script calls take microseconds), and the mlua crate provides safe Rust bindings with vendored compilation. When you cargo install proxelar, Lua 5.4 is compiled from source alongside everything else. No Python installation, no PATH issues, no version conflicts.
Scripting is behind a scripting feature flag (enabled by default), so if you need a minimal build without Lua, --no-default-features gives you exactly the same proxy as before.
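For readers curious how this kind of gating looks, here is a sketch of the Cargo.toml wiring; the exact version numbers and feature names are illustrative assumptions, not copied from the repository:

```toml
# Hypothetical Cargo.toml excerpt; names and versions are illustrative.
[features]
default = ["scripting"]
# The scripting feature pulls in mlua only when enabled.
scripting = ["dep:mlua"]

[dependencies]
# "vendored" compiles Lua 5.4 from source, so no system Lua is needed.
mlua = { version = "0.10", features = ["lua54", "vendored"], optional = true }
```

Building with `--no-default-features` then drops mlua from the dependency graph entirely.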
Let me walk through a few practical scenarios. Each one is a complete, working script.
The simplest use case: prevent requests from reaching certain hosts.
local blocked = { "ads%.example%.com", "tracker%.analytics%.com" }
function on_request(request)
    for _, pattern in ipairs(blocked) do
        if string.find(request.url, pattern) then
            return {
                status = 403,
                headers = { ["Content-Type"] = "text/plain" },
                body = "Blocked by Proxelar: " .. request.url,
            }
        end
    end
end

$ curl -x http://127.0.0.1:8080 http://ads.example.com/banner.js
Blocked by Proxelar: http://ads.example.com/banner.js
$ curl -x http://127.0.0.1:8080 http://example.com/
<!doctype html>... # passes through normally
When on_request returns a table with a status field, Proxelar treats it as a response and sends it back directly — the request never leaves the proxy. Return nil (or nothing) and the request passes through untouched.
During frontend development, you often need a backend endpoint that doesn't exist yet. Instead of setting up a mock server, point your app at the proxy and let the script handle it:
function on_request(request)
    if request.method == "GET" and string.find(request.url, "/api/user/me") then
        return {
            status = 200,
            headers = { ["Content-Type"] = "application/json" },
            body = '{"id": 1, "name": "Test User", "email": "test@example.com"}',
        }
    end
    if request.method == "POST" and string.find(request.url, "/api/login") then
        return {
            status = 200,
            headers = { ["Content-Type"] = "application/json" },
            body = '{"token": "mock-jwt-token-12345", "expires_in": 3600}',
        }
    end
end

$ curl -x http://127.0.0.1:8080 http://api.myapp.com/api/user/me
{"id": 1, "name": "Test User", "email": "test@example.com"}
$ curl -x http://127.0.0.1:8080 -X POST http://api.myapp.com/api/login
{"token": "mock-jwt-token-12345", "expires_in": 3600}
$ curl -x http://127.0.0.1:8080 http://api.myapp.com/api/products
# passes through to the real server
Unmocked endpoints pass through normally, so you can mix real and fake responses in the same session.
Need to test how your app behaves with specific headers? Inject them on every request:
function on_request(request)
    request.headers["Authorization"] = "Bearer dev-token-12345"
    request.headers["X-Request-ID"] = tostring(os.time())
    return request
end

$ curl -x http://127.0.0.1:8080 http://httpbin.org/headers
{
  "headers": {
    "Authorization": "Bearer dev-token-12345",
    "X-Request-ID": "1743206400",
    "Host": "httpbin.org",
    ...
  }
}
This is particularly useful for testing authenticated APIs without modifying client code or storing credentials in app configuration.
Every frontend developer has hit CORS issues during local development. Instead of configuring the backend, let the proxy fix it:
function on_response(request, response)
    response.headers["Access-Control-Allow-Origin"] = "*"
    response.headers["Access-Control-Allow-Methods"] = "GET, POST, PUT, DELETE, OPTIONS"
    response.headers["Access-Control-Allow-Headers"] = "Content-Type, Authorization"
    return response
end

$ curl -v -x http://127.0.0.1:8080 http://api.example.com/data
< HTTP/1.1 200 OK
< Content-Type: application/json
< Access-Control-Allow-Origin: *
< Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
< Access-Control-Allow-Headers: Content-Type, Authorization
The CORS headers are injected into every response, regardless of what the upstream server returns. Point your browser at the proxy and the cross-origin errors disappear.
You can modify response bodies too. This script injects a visual indicator into every HTML page so you always know you're browsing through the proxy:
function on_response(request, response)
    local ct = response.headers["content-type"] or ""
    if not string.find(ct, "text/html") then return end
    local banner = '<div style="position:fixed;top:0;left:0;right:0;'
        .. 'background:#ff6b35;color:white;text-align:center;'
        .. 'padding:4px;z-index:99999;font-size:12px;">'
        .. 'Proxied by Proxelar</div>'
    response.body = string.gsub(response.body, "<body>", "<body>" .. banner, 1)
    return response
end
Every HTML page now shows an orange bar at the top. Non-HTML responses (images, JSON, CSS) pass through untouched because the function returns nil early.
Remove known tracking cookies from your outgoing requests while keeping functional ones intact:
local tracking = { "_ga", "_gid", "fbp", "fr", "datr" }
function on_request(request)
    local cookie = request.headers["cookie"]
    if not cookie then return end
    local kept = {}
    for pair in string.gmatch(cookie, "([^;]+)") do
        pair = string.match(pair, "^%s*(.-)%s*$")
        local name = string.match(pair, "^([^=]+)")
        local is_tracking = false
        for _, tc in ipairs(tracking) do
            if name == tc then is_tracking = true; break end
        end
        if not is_tracking then table.insert(kept, pair) end
    end
    if #kept > 0 then
        request.headers["cookie"] = table.concat(kept, "; ")
    else
        request.headers["cookie"] = nil
    end
    return request
end

# Original cookie header:
# Cookie: session=abc123; _ga=GA1.2.123; lang=en; _gid=GA1.2.456
# After script:
# Cookie: session=abc123; lang=en

For quick debugging, print a summary of every request and response to stdout:
function on_request(request)
    print(string.format("[REQ] %s %s", request.method, request.url))
end

function on_response(request, response)
    local ct = response.headers["content-type"] or "unknown"
    local size = #response.body
    print(string.format("[RES] %s %s -> %d (%s, %d bytes)",
        request.method, request.url, response.status, ct, size))
end

$ proxelar -i terminal --script log_traffic.lua
# (in another terminal: curl -x http://127.0.0.1:8080 http://example.com)
[REQ] GET http://example.com/
[RES] GET http://example.com/ -> 200 (text/html; charset=UTF-8, 1256 bytes)
Notice that both hooks return nil (implicitly), so traffic passes through unchanged. The script is purely observational.
The examples above use forward proxy mode, where you configure your client to route through Proxelar. But scripting really shines in reverse proxy mode, where Proxelar sits in front of your service and you control the traffic between your clients and your backend. This is the setup you'd use at work — put the proxy in front of your local API, staging environment, or microservice, and let scripts handle the rest.
Your backend requires a JWT, but during local development you don't want to go through the login flow every time. Put Proxelar in front of your API and let the script handle auth:
proxelar -m reverse --target http://localhost:3000 --script auth_dev.lua -p 4000

-- auth_dev.lua
-- Clients hit localhost:4000, Proxelar forwards to localhost:3000 with auth injected
local DEV_USER = '{"sub": "user-42", "role": "admin", "name": "Dev User"}'

function on_request(request)
    -- Skip if the client already sent a token
    if request.headers["authorization"] then return end
    request.headers["authorization"] = "Bearer dev-token"
    -- Inject the decoded user context that your middleware expects
    request.headers["x-user-context"] = DEV_USER
    return request
end

# No token needed — the proxy injects it
$ curl http://localhost:4000/api/admin/users
[{"id": 1, "name": "Alice"}, {"id": 2, "name": "Bob"}]
# If you pass your own token, the script leaves it alone
$ curl -H "Authorization: Bearer real-token" http://localhost:4000/api/me
{"id": 7, "name": "You"}
Your frontend, Postman, or any HTTP client can hit localhost:4000 without worrying about tokens. The backend sees a properly authenticated request every time.
Your staging environment fails security audits because the backend doesn't set the right headers yet. Instead of waiting for a backend fix, add them at the proxy layer:
proxelar -m reverse --target http://localhost:3000 --script security_headers.lua -p 4000

-- security_headers.lua
function on_response(request, response)
    response.headers["Strict-Transport-Security"] = "max-age=31536000; includeSubDomains"
    response.headers["X-Content-Type-Options"] = "nosniff"
    response.headers["X-Frame-Options"] = "DENY"
    response.headers["Content-Security-Policy"] = "default-src 'self'"
    response.headers["Referrer-Policy"] = "strict-origin-when-cross-origin"
    return response
end

$ curl -v http://localhost:4000/
< HTTP/1.1 200 OK
< Strict-Transport-Security: max-age=31536000; includeSubDomains
< X-Content-Type-Options: nosniff
< X-Frame-Options: DENY
< Content-Security-Policy: default-src 'self'
< Referrer-Policy: strict-origin-when-cross-origin
Point the security scanner at localhost:4000 and the headers are there. When the backend team ships the real implementation, remove the proxy and nothing changes.
You want to verify that your frontend handles server errors gracefully — timeouts, 500s, rate limits. Instead of breaking your actual backend, make the proxy return errors for specific endpoints:
proxelar -m reverse --target http://localhost:3000 --script chaos.lua -p 4000

-- chaos.lua
-- Simulate failures on specific endpoints to test client error handling
function on_request(request)
    -- Simulate a 500 on the payments endpoint
    if string.find(request.url, "/api/payments") then
        return {
            status = 500,
            headers = { ["Content-Type"] = "application/json" },
            body = '{"error": "Internal Server Error", "message": "database connection timeout"}',
        }
    end
    -- Simulate rate limiting on search
    if string.find(request.url, "/api/search") then
        return {
            status = 429,
            headers = {
                ["Content-Type"] = "application/json",
                ["Retry-After"] = "30",
            },
            body = '{"error": "Too Many Requests", "retry_after": 30}',
        }
    end
end

$ curl http://localhost:4000/api/payments
{"error": "Internal Server Error", "message": "database connection timeout"}
$ curl http://localhost:4000/api/search?q=test
{"error": "Too Many Requests", "retry_after": 30}
$ curl http://localhost:4000/api/users
# passes through to the real backend normally
Edit the script, restart the proxy, and you have a different failure scenario. No mocking libraries, no environment variables, no code changes in your application.
Your backend returns a response that's almost right, but you need to tweak a field to unblock frontend work. Instead of modifying the backend or hardcoding values in the frontend, patch it at the proxy:
proxelar -m reverse --target http://localhost:3000 --script patch_api.lua -p 4000

-- patch_api.lua
-- Patch specific fields in API responses without touching the backend
function on_response(request, response)
    local ct = response.headers["content-type"] or ""
    if not string.find(ct, "application/json") then return end
    -- The backend doesn't return feature flags yet, but the frontend expects them
    if string.find(request.url, "/api/config") then
        if string.sub(response.body, 1, 1) == "{" then
            response.body = string.gsub(response.body, "}$",
                ',"feature_flags":{"new_dashboard":true,"dark_mode":true}}')
        end
        return response
    end
    -- Override the environment label so the frontend shows "staging"
    if string.find(request.url, "/api/health") then
        response.body = string.gsub(response.body, '"env":"development"', '"env":"staging"')
        return response
    end
end

$ curl http://localhost:4000/api/config
{"version": "1.2.0", "feature_flags": {"new_dashboard": true, "dark_mode": true}}
$ curl http://localhost:4000/api/health
{"status": "ok", "env": "staging", "uptime": 3600}
The backend returns the real data, and the proxy patches only what you need. When the backend catches up, delete the script.
The scripting engine lives in proxyapi/src/scripting.rs, behind a scripting feature flag. A single Lua VM is created at startup, loaded with the user's script, and shared across all connections via Arc<ScriptEngine>. The VM is protected by a std::sync::Mutex — not a tokio mutex, since Lua calls are synchronous and complete in microseconds.
The hooks are injected directly into the existing CapturingHandler, which already handles body collection and event emission. The request hook runs in handle_request() after the body is collected but before forwarding. The response hook runs in collect_and_emit() before the event is emitted to the UI. This means zero changes to the forward or reverse proxy modules — scripting is entirely transparent to the rest of the proxy.
Script errors are caught, logged, and the request passes through unchanged. A buggy script can never crash the proxy.
Update Proxelar:
cargo install proxelar
Run with a script:
proxelar --script examples/scripts/block_domain.lua
The repository includes 13 example scripts covering header injection, domain blocking, API mocking, CORS fixes, traffic logging, HTML rewriting, cookie stripping, and more. Each one is a standalone file you can use directly or adapt.
Scripting was the most important missing piece, but there's still a long road ahead. Here's what I'm working toward, roughly in priority order:
The full changelog is available on GitHub.