AutoGen + MoltPe Payment Integration: Complete Guide (2026)
Why AutoGen + MoltPe
AutoGen, Microsoft Research's multi-agent framework, is built around ConversableAgent: any agent can message any other agent, and tool use is modeled as "the LLM proposes a function call, a proxy executes it." That architecture is surprisingly good for payments — the proposer and executor are already separated, which matches the structure of a policy-enforced payment system.
MoltPe slots into AutoGen cleanly because:
- Function registration is the native extension point. There is no custom tool class hierarchy to learn. A plain Python function with a good docstring and type hints becomes an AutoGen tool. MoltPe's REST API wraps in about ten lines per function.
- Proposer/executor split matches policy enforcement. The Assistant proposes a payment; the UserProxy executes via a MoltPe REST call; MoltPe enforces the policy on its server. Three layers of defense with almost no extra code.
- GroupChat maps to multi-party commerce. AutoGen's GroupChat manager already orchestrates turn-taking among N agents. Give each agent its own MoltPe wallet and suddenly the group is a small internal market.
For background on the underlying wallet primitive, see AI Agent Wallet Explained and AI Agent Spending Policies.
Prerequisites
Python 3.10+, the AutoGen package, an LLM key, and one or more MoltPe agent tokens (one per AutoGen role that needs to spend). The examples below use pyautogen; the same pattern works for autogen-agentchat in v0.4.
pip install "pyautogen>=0.3" requests
export OPENAI_API_KEY="sk-..."
export MOLTPE_BASE_URL="https://api.moltpe.com"
export MOLTPE_AGENT_TOKEN="mpt_live_..."
In the dashboard, set the agent's policy: per-call cap $0.25, daily cap $5, allowed networks polygon and base, allowed recipients either "any" (for marketplace agents) or an explicit allowlist (for internal-only flows). Fund the wallet with test USDC.
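Those dashboard settings correspond to a policy object roughly like the following. This is an illustrative sketch, not the exact MoltPe schema — field names are assumptions based on the caps described above:

```json
{
  "per_call_cap_usd": 0.25,
  "daily_cap_usd": 5.00,
  "allowed_networks": ["polygon", "base"],
  "allowed_recipients": "any"
}
```

For internal-only flows, replace `"any"` with an explicit array of recipient wallet addresses.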
Approach 1: Assistant + UserProxy
The canonical AutoGen pattern is an AssistantAgent (the LLM) paired with a UserProxyAgent (the executor). The Assistant proposes function calls; the Proxy runs them. To wire MoltPe in, define REST-backed functions once and register them on both agents.
"""MoltPe payment functions for AutoGen. Plain Python — register on ConversableAgent."""
import os, uuid, requests
from typing import Annotated
BASE_URL = os.environ["MOLTPE_BASE_URL"]
TOKEN = os.environ["MOLTPE_AGENT_TOKEN"]
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
def check_balance() -> str:
"""Return the agent's current USDC balance in USD."""
r = requests.get(f"{BASE_URL}/v1/wallet/balance", headers=HEADERS, timeout=10)
r.raise_for_status()
return f"{r.json()['usdc_balance']} USDC"
def send_payment(
recipient_wallet: Annotated[str, "0x-prefixed wallet address"],
amount_usd: Annotated[float, "USD amount, e.g. 0.25"],
memo: Annotated[str, "Human-readable purpose"] = "",
) -> str:
"""Send USDC from the agent wallet. Subject to MoltPe server-side policy."""
body = {
"to": recipient_wallet,
"amount_usd": amount_usd,
"memo": memo,
"client_request_id": str(uuid.uuid4()), # idempotent retry
}
r = requests.post(f"{BASE_URL}/v1/payments", headers=HEADERS, json=body, timeout=30)
if r.status_code == 403:
return f"Rejected by policy: {r.json().get('error')}"
r.raise_for_status()
data = r.json()
return f"Paid ${amount_usd}. tx={data['tx_hash']} settled={data['settled_at']}"
def call_x402_endpoint(
url: Annotated[str, "HTTPS URL of the x402 endpoint"],
max_payment_usd: Annotated[float, "Hard cap for this call"] = 0.10,
) -> str:
"""Call a paid HTTP endpoint. MoltPe negotiates the 402 and retries."""
body = {"target_url": url, "method": "GET", "max_payment_usd": max_payment_usd}
r = requests.post(f"{BASE_URL}/v1/x402/proxy", headers=HEADERS, json=body, timeout=60)
r.raise_for_status()
return r.text
Now register the functions. The Assistant sees the schema (so its LLM knows it can pay); the UserProxy actually calls the function when the Assistant proposes it.
"""Register MoltPe functions on Assistant (LLM) and UserProxy (executor)."""
from autogen import AssistantAgent, UserProxyAgent
llm_config = {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"], "temperature": 0}
assistant = AssistantAgent(
name="paying_assistant",
system_message=(
"You are an agent with a USDC wallet. Use send_payment and "
"call_x402_endpoint when the task requires spending. Always "
"check_balance before and after a spend session."
),
llm_config=llm_config,
)
proxy = UserProxyAgent(
name="executor",
human_input_mode="NEVER", # fully autonomous within policy caps
max_consecutive_auto_reply=10,
code_execution_config=False, # no code exec; only registered funcs
)
for fn, desc in [
(check_balance, "Get current USDC balance."),
(send_payment, "Send USDC to a wallet."),
(call_x402_endpoint, "Call a paid HTTP endpoint."),
]:
assistant.register_for_llm(description=desc)(fn)
proxy.register_for_execution()(fn)
proxy.initiate_chat(assistant, message="Tip wallet 0x8f2e4bD7...aaa $0.10 for good work.")
The Assistant's LLM sees a registered send_payment tool. It calls it. The Proxy dispatches to the Python function. The function calls MoltPe. MoltPe checks policy and executes. A realistic response payload the Proxy feeds back into the conversation:
{
  "tx_hash": "0xabc123def456...",
  "amount_usd": 0.10,
  "from_agent": "paying_assistant",
  "to": "0x8f2e4bD7...aaa",
  "network": "polygon",
  "settled_at": "2026-04-25T11:03:12Z"
}
Approach 2: GroupChat With Per-Agent Wallets
AutoGen's GroupChat orchestrates N ConversableAgents with a manager selecting who speaks next. It is a natural shape for multi-party commerce: each participant has a role, each role has a wallet, payments flow as the conversation progresses.
The trick is one token per agent. Create a factory that builds a fresh set of registered functions bound to a specific MoltPe agent token, so each AutoGen agent executes as exactly one wallet.
"""GroupChat with one MoltPe wallet per participant."""
import os, requests, uuid
from functools import partial
from autogen import AssistantAgent, UserProxyAgent, GroupChat, GroupChatManager
BASE_URL = os.environ["MOLTPE_BASE_URL"]
def make_send_payment(token: str):
"""Return a send_payment function bound to a specific agent token."""
def send_payment(recipient_wallet: str, amount_usd: float, memo: str = "") -> str:
r = requests.post(
f"{BASE_URL}/v1/payments",
headers={"Authorization": f"Bearer {token}"},
json={"to": recipient_wallet, "amount_usd": amount_usd,
"memo": memo, "client_request_id": str(uuid.uuid4())},
timeout=30,
)
if r.status_code == 403:
return f"policy rejection: {r.json().get('error')}"
r.raise_for_status()
return f"paid ${amount_usd}, tx={r.json()['tx_hash']}"
send_payment.__name__ = "send_payment"
send_payment.__doc__ = "Send USDC from this agent's wallet. Policy-enforced."
return send_payment
def make_agent(name: str, system: str, moltpe_token: str, llm_config: dict):
asst = AssistantAgent(name=name, system_message=system, llm_config=llm_config)
proxy = UserProxyAgent(name=f"{name}_proxy", human_input_mode="NEVER",
code_execution_config=False)
pay = make_send_payment(moltpe_token)
asst.register_for_llm(description="Pay USDC")(pay)
proxy.register_for_execution()(pay)
return asst, proxy
Now build a GroupChat with three agents, each with its own wallet. The manager decides turn order; each agent pays and collects as a normal function call. Because the token lives in a closure, no agent can access another agent's wallet by accident or by prompt injection.
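The isolation claim is easy to verify in plain Python. Below is the same closure pattern with the HTTP call stubbed out — the function echoes the token instead of spending, so you can see that each returned callable is permanently bound to exactly one token at creation time:

```python
def make_send_payment(token: str):
    """Stubbed version of the factory above: echoes instead of paying."""
    def send_payment(recipient_wallet: str, amount_usd: float) -> str:
        # In the real function, `token` goes into the Authorization header.
        return f"token={token} to={recipient_wallet} amount=${amount_usd}"
    return send_payment

pay_a = make_send_payment("mpt_live_agent_a")
pay_b = make_send_payment("mpt_live_agent_b")

# Same arguments, different wallets: the token is fixed when the closure is
# created, so no prompt can redirect agent A's function to agent B's wallet.
print(pay_a("0x...ccc", 0.10))  # bound to mpt_live_agent_a
print(pay_b("0x...ccc", 0.10))  # bound to mpt_live_agent_b
```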
End-to-End Example
A full runnable demo: a freelancer agent, a reviewer agent, and a publisher agent. The freelancer delivers, the publisher pays the freelancer and the reviewer, everyone has their own wallet.
"""Three-agent AutoGen GroupChat with MoltPe wallets per role."""
import os
from autogen import GroupChat, GroupChatManager
llm_config = {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"], "temperature": 0}
freelancer, fp = make_agent(
"freelancer",
"You are a technical writer. Deliver a 3-bullet summary when asked.",
os.environ["MOLTPE_FREELANCER_TOKEN"], llm_config,
)
reviewer, rp = make_agent(
"reviewer",
"You are an editor. Approve or reject deliverables.",
os.environ["MOLTPE_REVIEWER_TOKEN"], llm_config,
)
publisher, pp = make_agent(
"publisher",
"You pay $0.50 to the freelancer and $0.10 to the reviewer on approval. "
"Freelancer wallet: 0x...aaa. Reviewer wallet: 0x...bbb.",
os.environ["MOLTPE_PUBLISHER_TOKEN"], llm_config,
)
chat = GroupChat(
agents=[freelancer, reviewer, publisher, fp, rp, pp],
messages=[], max_round=12,
)
manager = GroupChatManager(groupchat=chat, llm_config=llm_config)
pp.initiate_chat(
manager,
message="Commission a 3-bullet summary of 'x402 adoption in 2026'. "
"Freelancer writes, reviewer checks, I pay on approval.",
)
Check the MoltPe dashboard after the run: three wallets, two payments, each tagged with the task memo and bounded by the per-agent policies. If the publisher LLM hallucinates a $500 payment, MoltPe's server-side cap rejects it long before any USDC moves.
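When that rejection happens, the send_payment wrapper surfaces it as a string built from the 403 body's error field. A rejection body might look like this — only the error key is relied on by the code above; any other fields are illustrative:

```json
{
  "error": "per_call_cap_exceeded: requested $500.00, per-call cap is $0.25"
}
```

The publisher's LLM sees "policy rejection: per_call_cap_exceeded..." as the function result and can correct course in the next turn.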
Common Pitfalls
- One token for the whole GroupChat. Defeats isolation. Each AutoGen agent should have its own MoltPe token bound via closure, so compromise in one agent cannot touch the others.
- Forgetting register_for_execution. If you only register for the LLM, the Assistant will propose a function call and the Proxy will not know how to run it. Both registrations are required.
- Skipping client_request_id. AutoGen retries freely on transient errors. Passing a stable UUID makes MoltPe payments idempotent, preventing double-spend on retry.
- Running with human_input_mode="ALWAYS" in production. Fine for debugging, but it blocks autonomous flows. Use "NEVER" with tight MoltPe policy caps, or "TERMINATE" for exit points only.
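The retry and idempotency pitfalls combine into one pattern: generate the idempotency key once per logical payment, outside the retry loop, so every retry reuses it. A minimal sketch — send_fn stands in for a real payment wrapper, and TimeoutError for requests.exceptions.Timeout:

```python
import uuid

def pay_with_retries(send_fn, recipient: str, amount_usd: float, attempts: int = 3) -> str:
    """Retry transient failures without risking a double-spend."""
    request_id = str(uuid.uuid4())  # stable across all retries of this payment
    last_error = None
    for _ in range(attempts):
        try:
            # Every attempt carries the same client_request_id, so the server
            # can treat retries of one payment as one payment.
            return send_fn(recipient, amount_usd, client_request_id=request_id)
        except TimeoutError as exc:
            last_error = exc
    return f"gave up after {attempts} attempts: {last_error}"
```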
Frequently Asked Questions
Does this work with AutoGen v0.4 (the Core/Agentchat split)?
Yes. The REST-backed functions in this guide are pure Python, so they port to both the classic pyautogen ConversableAgent API and the newer autogen-agentchat AssistantAgent. Registration APIs differ slightly between versions, but the payment logic does not.
How do I prevent the Assistant from paying arbitrary amounts?
Defense in depth. First, MoltPe's server-side policy is the hard ceiling — a per-call and daily USD cap the agent cannot override. Second, validate amount_usd in the registered Python function before calling the MoltPe API. The LLM proposes, the proxy executes, MoltPe enforces.
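A sketch of that client-side check. The cap value mirrors the example policy from this guide; in practice you would keep it in sync with whatever per-call cap you set in the MoltPe dashboard:

```python
MAX_PER_CALL_USD = 0.25  # client-side mirror of the server-side per-call cap

def validate_amount(amount_usd: float) -> None:
    """Fast-fail obviously bad proposals before any network call is made."""
    if not isinstance(amount_usd, (int, float)):
        raise ValueError(f"not a number: {amount_usd!r}")
    if amount_usd <= 0:
        raise ValueError(f"amount must be positive, got {amount_usd}")
    if amount_usd > MAX_PER_CALL_USD:
        raise ValueError(f"${amount_usd} exceeds client-side cap ${MAX_PER_CALL_USD}")
```

Call validate_amount(amount_usd) at the top of send_payment; a ValueError is fed back to the LLM as a tool error, and MoltPe's server-side cap still backstops anything that slips through.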
Can I use MoltPe with a GroupChat?
Yes. Each participant in a GroupChat gets its own MoltPe agent and its own set of registered payment functions bound to its token. The group manager turn-taking does not need to know about wallets — agents pay and collect on their turns as a normal function call.
Does MoltPe support tool_choice=required on AutoGen?
Yes. MoltPe tools are exposed as standard OpenAI-style function schemas via AutoGen's registration helpers, so any llm_config flag that works for other tools (tool_choice, parallel_tool_calls) works for MoltPe functions too.
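A sketch of what that looks like in the llm_config, assuming (per the answer above) that your AutoGen version and model provider pass these flags through unchanged:

```python
import os

llm_config = {
    "model": "gpt-4o",
    "api_key": os.environ.get("OPENAI_API_KEY", ""),
    "temperature": 0,
    "tool_choice": "required",     # model must call a registered tool each turn
    "parallel_tool_calls": False,  # at most one payment proposal at a time
}
```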
What happens if the MoltPe REST call times out?
The registered function should catch the timeout and return a structured error string. AutoGen will feed that back to the LLM, which can retry or abort. Payments in MoltPe are idempotent when you pass a client_request_id, so safe-retries do not double-spend.
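One way to implement that is a generic wrapper applied to any of the registered functions before registration. TimeoutError here stands in for requests.exceptions.Timeout; swap it in when wrapping the real REST helpers:

```python
import functools

def with_timeout_guard(fn):
    """Turn a timeout into a string the LLM can act on, not a crash."""
    @functools.wraps(fn)  # keep name/docstring so AutoGen's tool schema is unchanged
    def guarded(*args, **kwargs) -> str:
        try:
            return fn(*args, **kwargs)
        except TimeoutError as exc:  # use requests.exceptions.Timeout in real code
            return ("ERROR: MoltPe API timed out; safe to retry with the same "
                    f"client_request_id. ({exc})")
    return guarded
```

Register the wrapped function — e.g. with_timeout_guard(send_payment) — on both the Assistant and the Proxy in place of the raw one.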
Wire payments into your AutoGen crew
Give each AutoGen role an isolated wallet, register three Python functions, and let your agents transact autonomously under policy. Free account, test USDC, full ledger.
Get Started Free →
About MoltPe
MoltPe is AI-native payment infrastructure that gives AI agents isolated wallets with programmable spending policies for autonomous USDC transactions. Live on Polygon PoS, Base, and Tempo. Supports MCP, x402, and REST API, with first-class integrations for LangChain, CrewAI, AutoGen, and Claude Code.