Why AutoGen + MoltPe

AutoGen, Microsoft Research's multi-agent framework, is built around ConversableAgent: any agent can message any other agent, and tool use is modeled as "the LLM proposes a function call, a proxy executes it." That architecture is surprisingly good for payments — the proposer and executor are already separated, which matches the structure of a policy-enforced payment system.

MoltPe slots into AutoGen cleanly because:

- Tool use is already split into a proposer (the LLM) and an executor (the proxy), which mirrors MoltPe's split between the agent that requests a payment and the server-side policy that approves it.
- Payment functions are plain Python, so they register like any other AutoGen tool.
- Each AutoGen agent can hold its own MoltPe token, giving every role its own wallet, balance, and policy.

For background on the underlying wallet primitive, see AI Agent Wallet Explained and AI Agent Spending Policies.

Prerequisites

Python 3.10+, the AutoGen package, an LLM key, and one or more MoltPe agent tokens (one per AutoGen role that needs to spend). The examples below use pyautogen; the same concepts carry over to autogen-agentchat in v0.4, though the tool-registration API differs there.

pip install "pyautogen>=0.3" requests

export OPENAI_API_KEY="sk-..."
export MOLTPE_BASE_URL="https://api.moltpe.com"
export MOLTPE_AGENT_TOKEN="mpt_live_..."

In the dashboard, set the agent's policy: per-call cap $0.25, daily cap $5, allowed networks polygon and base, allowed recipients either "any" (for marketplace agents) or an explicit allowlist (for internal-only flows). Fund the wallet with test USDC.
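The dashboard settings above can also be written down as a plain policy document, which makes the caps easy to reason about. The field names below are illustrative, not the exact MoltPe schema, and the local pre-check is only a convenience; the server enforces the real policy:

```python
# Illustrative policy document matching the dashboard settings above.
# Field names are hypothetical; consult the MoltPe policy docs for the real schema.
policy = {
    "per_call_cap_usd": 0.25,      # reject any single payment above $0.25
    "daily_cap_usd": 5.00,         # reject once the rolling daily total hits $5
    "allowed_networks": ["polygon", "base"],
    "allowed_recipients": "any",   # or a list of 0x addresses for internal-only flows
}

def within_policy(amount_usd: float, spent_today: float, network: str) -> bool:
    """Local pre-check mirroring the server-side rules (the server still enforces them)."""
    return (
        amount_usd <= policy["per_call_cap_usd"]
        and spent_today + amount_usd <= policy["daily_cap_usd"]
        and network in policy["allowed_networks"]
    )
```

Running the pre-check before calling the API saves a round trip on payments that would obviously be rejected, but it never replaces the server-side check.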

Approach 1: Assistant + UserProxy

The canonical AutoGen pattern is an AssistantAgent (the LLM) paired with a UserProxyAgent (the executor). The Assistant proposes function calls; the Proxy runs them. To wire MoltPe in, define REST-backed functions once and register them on both agents.

"""MoltPe payment functions for AutoGen. Plain Python — register on ConversableAgent."""
import os, uuid, requests
from typing import Annotated

BASE_URL = os.environ["MOLTPE_BASE_URL"]
TOKEN    = os.environ["MOLTPE_AGENT_TOKEN"]
HEADERS  = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}


def check_balance() -> str:
    """Return the agent's current USDC balance in USD."""
    r = requests.get(f"{BASE_URL}/v1/wallet/balance", headers=HEADERS, timeout=10)
    r.raise_for_status()
    return f"{r.json()['usdc_balance']} USDC"


def send_payment(
    recipient_wallet: Annotated[str, "0x-prefixed wallet address"],
    amount_usd:       Annotated[float, "USD amount, e.g. 0.25"],
    memo:             Annotated[str,   "Human-readable purpose"] = "",
) -> str:
    """Send USDC from the agent wallet. Subject to MoltPe server-side policy."""
    body = {
        "to": recipient_wallet,
        "amount_usd": amount_usd,
        "memo": memo,
        "client_request_id": str(uuid.uuid4()),  # idempotent retry
    }
    r = requests.post(f"{BASE_URL}/v1/payments", headers=HEADERS, json=body, timeout=30)
    if r.status_code == 403:
        return f"Rejected by policy: {r.json().get('error')}"
    r.raise_for_status()
    data = r.json()
    return f"Paid ${amount_usd}. tx={data['tx_hash']} settled={data['settled_at']}"


def call_x402_endpoint(
    url:              Annotated[str,   "HTTPS URL of the x402 endpoint"],
    max_payment_usd:  Annotated[float, "Hard cap for this call"] = 0.10,
) -> str:
    """Call a paid HTTP endpoint. MoltPe negotiates the 402 and retries."""
    body = {"target_url": url, "method": "GET", "max_payment_usd": max_payment_usd}
    r = requests.post(f"{BASE_URL}/v1/x402/proxy", headers=HEADERS, json=body, timeout=60)
    r.raise_for_status()
    return r.text
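One subtlety with client_request_id: a retry is only idempotent if the same id is reused across attempts, and the functions above mint a fresh UUID on every call. A sketch of a safe retry wrapper looks like this (`post` stands in for the actual HTTP call, and the backoff policy is an assumption, not MoltPe guidance):

```python
import time
import uuid

def send_with_retry(post, body, attempts=3):
    """Retry a payment POST, reusing one client_request_id so the server
    can deduplicate. `post` is any callable that takes the request body."""
    body = dict(body, client_request_id=str(uuid.uuid4()))  # minted once, reused below
    for attempt in range(attempts):
        try:
            return post(body)
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts; surface the failure
            time.sleep(2 ** attempt)  # simple exponential backoff
```

Because the id is minted before the loop, a payment that timed out on the wire cannot be charged twice when the wrapper retries it.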

Now register the functions. The Assistant sees the schema (so its LLM knows it can pay); the UserProxy actually calls the function when the Assistant proposes it.

"""Register MoltPe functions on Assistant (LLM) and UserProxy (executor)."""
from autogen import AssistantAgent, UserProxyAgent

llm_config = {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"], "temperature": 0}

assistant = AssistantAgent(
    name="paying_assistant",
    system_message=(
        "You are an agent with a USDC wallet. Use send_payment and "
        "call_x402_endpoint when the task requires spending. Always "
        "check_balance before and after a spend session."
    ),
    llm_config=llm_config,
)

proxy = UserProxyAgent(
    name="executor",
    human_input_mode="NEVER",          # fully autonomous within policy caps
    max_consecutive_auto_reply=10,
    code_execution_config=False,       # no code exec; only registered funcs
)

for fn, desc in [
    (check_balance,       "Get current USDC balance."),
    (send_payment,        "Send USDC to a wallet."),
    (call_x402_endpoint,  "Call a paid HTTP endpoint."),
]:
    assistant.register_for_llm(description=desc)(fn)
    proxy.register_for_execution()(fn)

proxy.initiate_chat(assistant, message="Tip wallet 0x8f2e4bD7...aaa $0.10 for good work.")

The Assistant's LLM sees a registered send_payment tool. It calls it. The Proxy dispatches to the Python function. The function calls MoltPe. MoltPe checks policy and executes. A realistic response payload the Proxy feeds back into the conversation:

{
  "tx_hash": "0xabc123def456...",
  "amount_usd": 0.10,
  "from_agent": "paying_assistant",
  "to": "0x8f2e4bD7...aaa",
  "network": "polygon",
  "settled_at": "2026-04-25T11:03:12Z"
}

Approach 2: GroupChat With Per-Agent Wallets

AutoGen's GroupChat orchestrates N ConversableAgents with a manager selecting who speaks next. It is a natural shape for multi-party commerce: each participant has a role, each role has a wallet, payments flow as the conversation progresses.

The trick is one token per agent. Create a factory that builds a fresh set of registered functions bound to a specific MoltPe agent token, so each AutoGen agent executes as exactly one wallet.

"""GroupChat with one MoltPe wallet per participant."""
import os, requests, uuid
from autogen import AssistantAgent, UserProxyAgent

BASE_URL = os.environ["MOLTPE_BASE_URL"]

def make_send_payment(token: str):
    """Return a send_payment function bound to a specific agent token."""
    def send_payment(recipient_wallet: str, amount_usd: float, memo: str = "") -> str:
        r = requests.post(
            f"{BASE_URL}/v1/payments",
            headers={"Authorization": f"Bearer {token}"},
            json={"to": recipient_wallet, "amount_usd": amount_usd,
                  "memo": memo, "client_request_id": str(uuid.uuid4())},
            timeout=30,
        )
        if r.status_code == 403:
            return f"policy rejection: {r.json().get('error')}"
        r.raise_for_status()
        return f"paid ${amount_usd}, tx={r.json()['tx_hash']}"
    send_payment.__name__ = "send_payment"
    send_payment.__doc__  = "Send USDC from this agent's wallet. Policy-enforced."
    return send_payment

def make_agent(name: str, system: str, moltpe_token: str, llm_config: dict):
    asst = AssistantAgent(name=name, system_message=system, llm_config=llm_config)
    proxy = UserProxyAgent(name=f"{name}_proxy", human_input_mode="NEVER",
                           code_execution_config=False)
    pay = make_send_payment(moltpe_token)
    asst.register_for_llm(description="Pay USDC")(pay)
    proxy.register_for_execution()(pay)
    return asst, proxy

Now build a GroupChat with three agents, each with its own wallet. The manager decides turn order; each agent pays and collects as a normal function call. Because each token lives in its own closure, no agent can spend from another agent's wallet; a prompt injection can at most trigger that agent's own policy-capped payments.
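That isolation property is easy to sanity-check outside AutoGen: each closure returned by the factory captures exactly one token, so two agents built this way never share credentials. A minimal stand-alone sketch with the HTTP call stubbed out:

```python
def make_send_payment(token: str):
    """Simplified version of the factory above; the HTTP call is stubbed out."""
    def send_payment(recipient_wallet: str, amount_usd: float) -> str:
        # In the real function this token goes into the Authorization header
        return f"paid ${amount_usd} to {recipient_wallet} using {token[:12]}..."
    return send_payment

# Two agents, two tokens, two independent closures
pay_alpha = make_send_payment("mpt_live_alpha_secret")
pay_beta  = make_send_payment("mpt_live_beta_secret")
```

Nothing an LLM puts into the function arguments can reach the other closure's token; the binding happens in Python, before any model output is involved.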

End-to-End Example

A full runnable demo: a freelancer agent, a reviewer agent, and a publisher agent. The freelancer delivers, the publisher pays the freelancer and the reviewer, everyone has their own wallet.

"""Three-agent AutoGen GroupChat with MoltPe wallets per role."""
import os
from autogen import GroupChat, GroupChatManager

llm_config = {"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"], "temperature": 0}

freelancer, fp = make_agent(
    "freelancer",
    "You are a technical writer. Deliver a 3-bullet summary when asked.",
    os.environ["MOLTPE_FREELANCER_TOKEN"], llm_config,
)
reviewer, rp = make_agent(
    "reviewer",
    "You are an editor. Approve or reject deliverables.",
    os.environ["MOLTPE_REVIEWER_TOKEN"], llm_config,
)
publisher, pp = make_agent(
    "publisher",
    "You pay $0.50 to the freelancer and $0.10 to the reviewer on approval. "
    "Freelancer wallet: 0x...aaa. Reviewer wallet: 0x...bbb.",
    os.environ["MOLTPE_PUBLISHER_TOKEN"], llm_config,
)

chat = GroupChat(
    agents=[freelancer, reviewer, publisher, fp, rp, pp],
    messages=[], max_round=12,
)
manager = GroupChatManager(groupchat=chat, llm_config=llm_config)

pp.initiate_chat(
    manager,
    message="Commission a 3-bullet summary of 'x402 adoption in 2026'. "
            "Freelancer writes, reviewer checks, I pay on approval.",
)

Check the MoltPe dashboard after the run: three wallets, two payments, each tagged with the task memo and bounded by the per-agent policies. If the publisher LLM hallucinates a $500 payment, MoltPe's server-side cap rejects it long before any USDC moves.
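You can see how that rejection reaches the conversation without touching the network: the 403 branch in send_payment turns the policy error into a plain string the LLM reads as an observation. A stubbed sketch (FakeResponse stands in for the requests response and is not part of MoltPe; the error message is illustrative):

```python
class FakeResponse:
    """Stub mimicking requests' response to a policy-rejected payment."""
    status_code = 403
    def json(self):
        return {"error": "amount_usd 500.0 exceeds per_call_cap_usd 0.25"}

def summarize(r) -> str:
    # Mirrors the 403 branch in send_payment: return text instead of raising,
    # so the model sees the rejection and can adjust its next proposal.
    if r.status_code == 403:
        return f"Rejected by policy: {r.json().get('error')}"
    return "ok"

print(summarize(FakeResponse()))
```

Returning a string rather than raising matters: an exception would end the tool call with a stack trace, while a readable rejection lets the GroupChat continue with the cap as new information.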

Common Pitfalls