MCP Study Notes

Posted by iceyao on Monday, April 14, 2025

1. What is MCP

MCP is an open protocol that standardizes how applications provide context to large language models. Think of MCP as a USB-C port for AI applications: just as USB-C gives devices a standardized way to connect all kinds of peripherals and accessories, MCP gives AI models a standardized way to connect to different data sources and tools.

2. Why MCP

MCP provides a standard way to connect AI models to different data sources and tools. Concretely, MCP exists to:

  • Provide pre-built integrations - a growing set of ready-made integrations that LLMs can plug into directly
  • Allow switching between LLM providers - the freedom to change LLM vendors without rewriting integrations
  • Secure your data - best practices for keeping data within your own infrastructure

3. MCP Architecture

+---------------------------------------------------------------+
|                         Your Computer                         |
|                                                               |
|                    +-------------+         +----------------+ |
|  +----------+      |             |<------->|                | |
|  |          |<---->| MCP Server A|         | Local          | |
|  |          |      |             |         | Data Source A  | |
|  |          |      +-------------+         +----------------+ |
|  |          |                                                 |
|  | Host with|                                                 |
|  | MCP      |      +-------------+         +----------------+ |
|  | Client   |<---->|             |<------->|                | |
|  | (Claude, |      | MCP Server B|         | Local          | |
|  | IDEs,    |      |             |         | Data Source B  | |
|  | Tools)   |      +-------------+         +----------------+ |
|  |          |                                                 |
|  |          |      +-------------+                            |
|  |          |<---->|             |         +----------------+ |
|  +----------+      | MCP Server C|<------->|                | |
|                    |             |  Web    | Remote         | |
|                    +-------------+  APIs   | Service C      | |
|                                            |                | |
+---------------------------------------------------------------+
                                             |                |
                                             +----------------+
                                                  Internet

MCP follows a client-server architecture in which a host application can connect to multiple servers:

  • MCP Hosts: programs such as Claude Desktop, IDEs, or AI tools that want to access data through MCP
  • MCP Clients: protocol clients that maintain 1:1 connections with servers
  • MCP Servers: lightweight programs that each expose specific capabilities through the standardized Model Context Protocol
  • Local Data Sources: files, databases, and services on your computer that MCP servers can securely access
  • Remote Services: external systems on the internet (e.g. reachable via APIs) that MCP servers can connect to

3.1 MCP and LLMs

The difference between an LLM without MCP and an LLM with MCP:

[Figure: without MCP - the model is wired to each data source and tool individually]

[Figure: with MCP - data sources and tools sit behind a common protocol layer]

Comparing the two figures: introducing MCP adds a standard abstraction layer between the model and its tools. Once a standard exists, an ecosystem can form around it quickly, and the ecosystem is the real "moat".

3.2 MCP and A2A

As for how MCP relates to A2A: A2A (Agent-to-Agent) is a protocol framework for communication between AI agents, while MCP is a protocol framework for communication between AI models and different data sources and tools.

+------------------------+                      +------------------------+
|                        |                      |                        |
|        Agent           |                      |        Agent           |
|                        |                      |                        |
| +------------------+   |     A2A protocol     | +------------------+   |
| |                  |   |  <--------------->   | |                  |   |
| |   Local Agents   |   |                      | |   Local Agents   |   |
| |    [o] [o] [o]   |   |                      | |    [o] [o] [o]   |   |
| |                  |   |                      | |                  |   |
| +------------------+   |                      | +------------------+   |
|                        |                      |                        |
| +------------------+   |   (organizational    | +------------------+   |
| |                  |   |    or technical      | |                  |   |
| | Vertex AI        |   |    boundary)         | |       LLM        |   |
| | (Gemini API, 3P) |   |                      | |                  |   |
| |                  |   |                      | |                  |   |
| +------------------+   |                      | +------------------+   |
|                        |                      |                        |
| +------------------+   |                      | +------------------+   |
| |                  |   |                      | |                  |   |
| | Agent Development|   |                      | | Agent Framework  |   |
| | Kit (ADK)        |   |                      | |                  |   |
| |                  |   |                      | |                  |   |
| +------------------+   |                      | +------------------+   |
|          ^             |                      |          ^             |
|          |             |                      |          |             |
|          v             |                      |          v             |
|     +---------+        |                      |     +---------+        |
|     |   MCP   |        |                      |     |   MCP   |        |
|     +---------+        |                      |     +---------+        |
|          ^             |                      |          ^             |
|          |             |                      |          |             |
|          v             |                      |          v             |
| +------------------+   |                      | +------------------+   |
| |                  |   |                      | |                  |   |
| | APIs & Enterprise|   |                      | | APIs & Enterprise|   |
| | Applications     |   |                      | | Applications     |   |
| |                  |   |                      | |                  |   |
| +------------------+   |                      | +------------------+   |
|                        |                      |                        |
+------------------------+                      +------------------------+

3.3 MCP Transports

How MCP clients and servers communicate with each other:

  • Stdio: communication over standard input and output streams (suited to local programs)
      sequenceDiagram
      participant Client
      participant Server as Server Process
    
      Client->>Server: Launch subprocess
    
      rect rgb(240, 240, 240)
      note over Client,Server: Message Exchange
    
      loop Message exchange
          Client->>Server: Write to stdin
          Server->>Client: Write to stdout
          Server-->>Client: Optional logs on stderr
      end
      end
    
      Client->>Server: Close stdin, terminate subprocess
    
  • Streamable HTTP: communication over HTTP (suited to remote calls). Starting with protocol revision 2025-03-26, Streamable HTTP replaces the older HTTP+SSE transport, with SSE becoming an optional part of the implementation.
    ---
    title: Sequence diagram of the legacy HTTP+SSE transport
    ---
    sequenceDiagram
      participant Client
      participant Server
    
      Client->>Server: Open SSE connection
      Server->>Client: endpoint event
    
      rect rgb(240, 240, 240)
      note over Client,Server: [Message Exchange]
    
      loop
          Client->>Server: HTTP POST messages
          Server->>Client: SSE message events
      end
      end
    
      Client->>Server: Close SSE connection
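
As a toy illustration of the stdio transport described above, the sketch below has a client spawn a subprocess "server" and exchange one newline-delimited JSON message over its stdin/stdout. This deliberately skips MCP's real message framing and lifecycle; it only shows the transport principle.

```python
import json
import subprocess
import sys

# A toy "server": reads one JSON message from stdin, answers on stdout.
SERVER_CODE = """
import json, sys
req = json.loads(sys.stdin.readline())
resp = {"jsonrpc": "2.0", "id": req["id"], "result": "pong"}
sys.stdout.write(json.dumps(resp) + "\\n")
sys.stdout.flush()
"""

# The client launches the server as a subprocess...
proc = subprocess.Popen(
    [sys.executable, "-c", SERVER_CODE],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True,
)
# ...writes a request to the server's stdin...
proc.stdin.write(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "ping"}) + "\n")
proc.stdin.flush()
# ...and reads the response from the server's stdout.
reply = json.loads(proc.stdout.readline())
proc.stdin.close()   # closing stdin signals the server to terminate
proc.wait()
assert reply == {"jsonrpc": "2.0", "id": 1, "result": "pong"}
```

A real MCP client does the same spawn/write/read dance, but with the initialization handshake and JSON-RPC message types defined by the spec.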
    

MCP uses JSON-RPC 2.0 as the wire format for all of its messages; the full protocol specification can be found in the official MCP documentation. Notably, MCP's design draws on the Language Server Protocol (LSP) developed by Microsoft. LSP is a proven example of this kind of protocol: since its introduction in 2016 it has standardized communication between IDEs and language tooling, powering features such as code completion, go-to-definition, and find-references. MCP aims to achieve the same kind of standardization in the AI domain.
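
As a concrete sketch of that wire format, the dicts below follow the JSON-RPC 2.0 shape of a tools/call exchange; the tool name and payload values are invented for illustration.

```python
import json

# A client -> server tool invocation request in JSON-RPC 2.0 form
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_alerts", "arguments": {"state": "CA"}},
}

# The server's reply reuses the request id so the client can correlate them
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text",
                            "text": "No active alerts for this state."}]},
}

wire = json.dumps(request)  # this string is what travels over stdio or HTTP
assert json.loads(wire)["method"] == "tools/call"
assert response["id"] == request["id"]
```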

4. MCP Development and Usage

4.1 Developing an MCP Server

Using Python as an example, let's build a weather MCP server (of the Tool flavor); the core idea is to expose methods via the mcp.tool decorator.

  1. Set up a Python environment with uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Initialize the project
uv init weather
cd weather

# Create a virtual environment
uv venv
source .venv/bin/activate

# Remove boilerplate files
rm main.py

# Install dependencies
uv add "mcp[cli]" httpx

uv is one of the most popular Python package managers today. It is implemented in Rust, and the project claims installs are 10-100x faster than traditional tools.

  2. Create the server file weather.py, using the stdio transport
from typing import Any
import httpx
from pydantic import Field
from mcp.server.fastmcp import FastMCP

# Initialize FastMCP server
mcp = FastMCP("weather")

# Constants
NWS_API_BASE = "https://api.weather.gov"
USER_AGENT = "weather-app/1.0"

# Define helper function
async def make_nws_request(url: str) -> dict[str, Any] | None:
    """Make a request to the NWS API with proper error handling."""
    headers = {
        "User-Agent": USER_AGENT,
        "Accept": "application/geo+json"
    }
    async with httpx.AsyncClient() as client:
        try:
            response = await client.get(url, headers=headers, timeout=30.0)
            response.raise_for_status()
            return response.json()
        except Exception:
            return None

def format_alert(feature: dict) -> str:
    """Format an alert feature into a readable string."""
    props = feature["properties"]
    return f"""
Event: {props.get('event', 'Unknown')}
Area: {props.get('areaDesc', 'Unknown')}
Severity: {props.get('severity', 'Unknown')}
Description: {props.get('description', 'No description available')}
Instructions: {props.get('instruction', 'No specific instructions provided')}
"""

# Expose the method via the mcp.tool decorator
@mcp.tool()
async def get_alerts(
    state: str = Field(description="Two-letter US state code (e.g. CA, NY)"),
) -> str:
    """Get weather alerts for a US state.

    Args:
        state: Two-letter US state code (e.g. CA, NY)
    """
    url = f"{NWS_API_BASE}/alerts/active/area/{state}"
    data = await make_nws_request(url)

    if not data or "features" not in data:
        return "Unable to fetch alerts or no alerts found."

    if not data["features"]:
        return "No active alerts for this state."

    alerts = [format_alert(feature) for feature in data["features"]]
    return "\n---\n".join(alerts)

@mcp.tool()
async def get_forecast(
    latitude: float = Field(description="Latitude of the location"),
    longitude: float = Field(description="Longitude of the location"),
) -> str:
    """Get weather forecast for a location.

    Args:
        latitude: Latitude of the location
        longitude: Longitude of the location
    """
    # First get the forecast grid endpoint
    points_url = f"{NWS_API_BASE}/points/{latitude},{longitude}"
    points_data = await make_nws_request(points_url)

    if not points_data:
        return "Unable to fetch forecast data for this location."

    # Get the forecast URL from the points response
    forecast_url = points_data["properties"]["forecast"]
    forecast_data = await make_nws_request(forecast_url)

    if not forecast_data:
        return "Unable to fetch detailed forecast."

    # Format the periods into a readable forecast
    periods = forecast_data["properties"]["periods"]
    forecasts = []
    for period in periods[:5]:  # Only show next 5 periods
        forecast = f"""
{period['name']}:
Temperature: {period['temperature']}°{period['temperatureUnit']}
Wind: {period['windSpeed']} {period['windDirection']}
Forecast: {period['detailedForecast']}
"""
        forecasts.append(forecast)

    return "\n---\n".join(forecasts)

if __name__ == "__main__":
    # Run the MCP server over stdio
    mcp.run(transport='stdio')

When developing an MCP server with the MCP python-sdk you will come across both the FastMCP class and the Server class. The difference between mcp.server.lowlevel.Server and FastMCP:

  • Server is the low-level MCP implementation: full control over the protocol, but more manual wiring
  • FastMCP is a high-level interface built on top of Server, with a more concise API and extra conveniences
  • Most developers should use FastMCP, unless they need fine-grained control over the MCP protocol
  • Internally, FastMCP uses the Server class to handle the core protocol
  3. Verify it works. Here we use VS Code with the Cline extension as the MCP client: edit Cline MCP Servers -> Configure MCP Servers (cline_mcp_settings.json) and substitute the absolute path
{
    "mcpServers": {
        "weather": {
            "command": "uv",
            "args": [
                "--directory",
                "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather",
                "run",
                "weather.py"
            ]
        }
    }
}

Example: typing an instruction in the Cline chat window asking for the New York weather triggers the get_forecast method.

4.2 Developing an MCP Client

Again in Python, let's implement an MCP client that talks to the weather MCP server above.

  1. Set up a Python environment with uv
uv init mcp-client
cd mcp-client

uv venv
source .venv/bin/activate

# Install required packages
uv add mcp anthropic python-dotenv

# Remove boilerplate files
rm main.py

# Create our main file
touch client.py

# Add your key to the .env file
cat << EOF > .env
OPENAI_API_KEY="sk-xxx" # replace with your real API key
OPENAI_BASE_URL="https://api.deepseek.com"
EOF
  2. Implement the client in client.py. The official example calls a Claude model; here an AI coding tool was used to rework it to call a DeepSeek model via the OpenAI SDK. Note that the plain OpenAI SDK does not support MCP natively, but the OpenAI Agents SDK does.
import asyncio
import json
import sys
from typing import Optional
from contextlib import AsyncExitStack

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from openai import OpenAI
from dotenv import load_dotenv

load_dotenv()  # load environment variables from .env

class MCPClient:
    def __init__(self):
        # Initialize session and client objects
        self.session: Optional[ClientSession] = None
        self.exit_stack = AsyncExitStack()
        self.openai = OpenAI()

    async def connect_to_server(self, server_script_path: str):
        """Connect to an MCP server

        Args:
            server_script_path: Path to the server script (.py or .js)
        """
        is_python = server_script_path.endswith('.py')
        is_js = server_script_path.endswith('.js')
        if not (is_python or is_js):
            raise ValueError("Server script must be a .py or .js file")

        command = "python" if is_python else "node"
        server_params = StdioServerParameters(
            command=command,
            args=[server_script_path],
            env=None
        )

        stdio_transport = await self.exit_stack.enter_async_context(stdio_client(server_params))
        self.stdio, self.write = stdio_transport
        self.session = await self.exit_stack.enter_async_context(ClientSession(self.stdio, self.write))

        await self.session.initialize()

        # List available tools
        response = await self.session.list_tools()
        tools = response.tools
        print("\nConnected to server. Available tools:", [tool.name for tool in tools])

    async def process_query(self, query: str) -> str:
        """Process a query using OpenAI and available tools"""
        messages = [
            {
                "role": "user",
                "content": query
            }
        ]

        response = await self.session.list_tools()
        available_tools = [{
            "type": "function",
            "function": {
                "name": tool.name,
                "description": tool.description,
                "parameters": tool.inputSchema
            }
        } for tool in response.tools]

        # Initial OpenAI API call
        response = self.openai.chat.completions.create(
            model="deepseek-chat",
            max_tokens=1000,
            messages=messages,
            tools=available_tools
        )

        # Process response and handle tool calls
        final_text = []

        assistant_message = response.choices[0].message
        assistant_message_content = assistant_message.content or ""
        
        if assistant_message_content:
            final_text.append(assistant_message_content)
        
        # Handle tool calls if present
        if assistant_message.tool_calls:
            for tool_call in assistant_message.tool_calls:
                tool_name = tool_call.function.name
                tool_args = tool_call.function.arguments
                
                # Parse the JSON arguments string into a dict
                if isinstance(tool_args, str):
                    try:
                        tool_args_dict = json.loads(tool_args)
                    except json.JSONDecodeError:
                        tool_args_dict = {"text": tool_args}
                else:
                    tool_args_dict = tool_args
                
                # Execute tool call
                result = await self.session.call_tool(tool_name, tool_args_dict)
                final_text.append(f"[Calling tool {tool_name} with args {tool_args}]")
                
                # Add assistant message and tool result to messages
                messages.append({
                    "role": "assistant",
                    "content": None,  # None because this assistant message carries tool_calls
                    "tool_calls": [
                        {
                            "id": tool_call.id,
                            "type": "function",
                            "function": {
                                "name": tool_name,
                                "arguments": tool_args
                            }
                        }
                    ]
                })
                
                # Tool result messages must carry string content
                messages.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": str(result.content)  # ensure the content is a string
                })
                
                # Get next response from OpenAI
                response = self.openai.chat.completions.create(
                    model="deepseek-chat",
                    max_tokens=1000,
                    messages=messages,
                    tools=available_tools
                )
                
                next_content = response.choices[0].message.content or ""
                if next_content:
                    final_text.append(next_content)

        return "\n".join(final_text)

    async def chat_loop(self):
        """Run an interactive chat loop"""
        print("\nMCP client started!")
        print("Type your question, or 'quit' to exit.")

        while True:
            try:
                query = input("\nQuery: ").strip()

                if query.lower() == 'quit':
                    break

                response = await self.process_query(query)
                print("\n" + response)

            except Exception as e:
                print(f"\nError: {str(e)}")
                import traceback
                traceback.print_exc()  # print the full stack trace to aid debugging

    async def cleanup(self):
        """Clean up resources"""
        await self.exit_stack.aclose()

async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        sys.exit(1)

    client = MCPClient()
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()

if __name__ == "__main__":
    import sys
    asyncio.run(main())
  3. Run the MCP client against the MCP server from a terminal with uv
(mcp-client) iceyao@macbookair mcp-client % uv run client.py ../weather/weather.py
Processing request of type ListToolsRequest

Connected to server. Available tools: ['get_alerts', 'get_forecast']

MCP client started!
Type your question, or 'quit' to exit.

Query: Get the weather in Chicago
Processing request of type ListToolsRequest
Processing request of type CallToolRequest
HTTP Request: GET https://api.weather.gov/points/41.8781,-87.6298 "HTTP/1.1 200 OK"
HTTP Request: GET https://api.weather.gov/gridpoints/LOT/76,73/forecast "HTTP/1.1 200 OK"

[Calling tool get_forecast with args {"latitude":41.8781,"longitude":-87.6298}]
The forecast for Chicago:

- **Tonight**:  
  Temperature: 42°F  
  Wind: 5 mph NE  
  Mostly clear, with a low around 42°F.

- **Sunday**:  
  Temperature: 55°F  
  Wind: 5-10 mph SE  
  Mostly sunny, with a high near 55°F and gusts as high as 20 mph.

- **Sunday Night**:  
  Temperature: 52°F  
  Wind: 10 mph SE  
  Mostly clear, with a low around 52°F and gusts as high as 20 mph.

- **Monday**:  
  Temperature: 79°F  
  Wind: 10-20 mph S  
  Mostly sunny, with a high near 79°F and gusts as high as 35 mph.

- **Monday Night**:  
  Temperature: 65°F  
  Wind: 20-30 mph SW  
  Mostly cloudy, with a 60% chance of showers and thunderstorms, a low around 65°F, and gusts as high as 40 mph.

Plan your travel accordingly!
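
The tool-calling round trip in this transcript boils down to the message history below (plain dicts in the OpenAI chat format; the id and contents are shortened stand-ins). The client sends this full history back to the model on the second API call so it can compose the final natural-language answer.

```python
# One tool-calling round as the client's message history sees it:
messages = [
    {"role": "user", "content": "Get the weather in Chicago"},
    {  # the model decides to call a tool instead of answering directly
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_forecast",
                         "arguments": '{"latitude": 41.8781, "longitude": -87.6298}'},
        }],
    },
    {  # the tool result is fed back, linked by tool_call_id
        "role": "tool",
        "tool_call_id": "call_1",
        "content": "Tonight: Mostly clear, with a low around 42F ...",
    },
]
# The result message must reference the tool call it answers.
assert messages[1]["tool_calls"][0]["id"] == messages[2]["tool_call_id"]
```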

4.3 Debugging MCP

The official docs suggest several tools for debugging at different levels:

  1. MCP Inspector (my personal recommendation)
  2. Claude Desktop developer tools
  3. Server-side logging

How do you debug MCP with MCP Inspector? MCP Inspector is an interactive developer tool for testing and debugging MCP servers, launched via npx. It is essentially a general-purpose web-based MCP client that can exercise any MCP server, whether local or remote.

# Syntax
npx @modelcontextprotocol/inspector \
  uv \
  --directory path/to/server \
  run \
  package-name \
  args...
(mcp-client) (base) iceyao@macbookair Desktop % npx @modelcontextprotocol/inspector \
  uv \
  --directory \
  /Users/iceyao/Desktop/weather \
  run \
  weather.py
Starting MCP inspector...
⚙️ Proxy server listening on port 6277
New SSE connection
Query parameters: [Object: null prototype] {
  transportType: 'stdio',
  command: 'uv',
  args: '--directory /Users/iceyao/Desktop/weather run weather.py',
  env: '{"HOME":"/Users/iceyao","LOGNAME":"iceyao","PATH":"/Users/iceyao/.npm/_npx/5a9d879542beca3a/node_modules/.bin:/Users/iceyao/Desktop/node_modules/.bin:/Users/iceyao/node_modules/.bin:/Users/node_modules/.bin:/node_modules/.bin:/opt/homebrew/lib/node_modules/npm/node_modules/@npmcli/run-script/lib/node-gyp-bin:/Users/iceyao/Desktop/mcp-client/.venv/bin:/Users/iceyao/.local/bin:/usr/local/go/bin/:/Users/iceyao/Documents//bin:/opt/anaconda3/bin:/opt/anaconda3/condabin:/opt/homebrew/bin:/opt/homebrew/sbin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Applications/iTerm.app/Contents/Resources/utilities:/Users/iceyao/.orbstack/bin","SHELL":"/bin/zsh","TERM":"xterm-256color","USER":"iceyao"}'
}
Stdio transport: command=/Users/iceyao/.local/bin/uv, args=--directory,/Users/iceyao/Desktop/weather,run,weather.py
Spawned stdio transport
Connected MCP client to backing server transport
Created web app transport
Created web app transport
Set up MCP proxy
🔍 MCP Inspector is up and running at http://127.0.0.1:6274 🚀

Open http://127.0.0.1:6274 in a browser.

4.4 Other MCP Concepts

The server/client examples and the debugging walkthrough above focus on the Tools capability. While most MCP implementations today revolve around Tools, the protocol defines several other important concepts.

4.4.1 Resources

Resources expose data and content from the server to large language models (LLMs). They are a core primitive of the Model Context Protocol: they let a server provide data and content that clients can read and use as context for LLM interactions.

What exactly counts as a resource? File contents, database records, API responses, live system data, screenshots and images, log files, and more. They fall into two broad categories:

  • Text resources: source code, configuration files, log files, JSON/XML data, plain text
  • Binary resources: images, PDFs, audio files, video files, other non-text formats
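
On the wire, these two flavors appear in a resources/read result roughly as sketched below (the field shapes follow the MCP spec; the URIs and contents are invented):

```python
import base64

# A text resource: content travels in a "text" field
text_resource = {
    "uri": "file:///tmp/app.log",
    "mimeType": "text/plain",
    "text": "2025-04-14 12:00:00 INFO server started",
}

# A binary resource: content travels base64-encoded in a "blob" field
binary_resource = {
    "uri": "file:///tmp/chart.png",
    "mimeType": "image/png",
    "blob": base64.b64encode(b"\x89PNG...").decode("ascii"),
}

assert base64.b64decode(binary_resource["blob"]).startswith(b"\x89PNG")
```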

Example:

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("weather")

@mcp.resource("file:///tmp/app.log")
async def read_resource() -> str:
    # read_log_file is assumed to be your own async helper
    log_contents = await read_log_file()
    return log_contents

@mcp.resource("echo://{text}")
async def echo_template(text: str) -> str:
    """Echo the input text"""
    return f"Echo: {text}"

Resources can be read from the client side via the MCP SDK's ClientSession.read_resource method.

4.4.2 Prompts

Prompts let you create reusable prompt templates and workflows that clients can easily surface to users and LLMs. They provide a powerful way to standardize and share common LLM interactions. Prompts are designed to be user-controlled.

Prompts in MCP are predefined templates that can:

  • Accept dynamic arguments
  • Include context from resources
  • Chain multiple interactions
  • Guide specific workflows
  • Surface as UI elements (e.g. slash commands)
mcp = FastMCP("weather")

@mcp.prompt("echo")
async def echo_prompt(text: str) -> str:
    return text

Prompts can be fetched from the client side via the MCP SDK's ClientSession.get_prompt method.

4.4.3 Tools

Tools are a powerful primitive in the Model Context Protocol that lets servers expose executable functionality to clients. Through tools, LLMs can interact with external systems, perform computations, and take actions in the real world.

The server and client examples above are implementations of Tools. Today, most MCP clients, such as Cursor and Windsurf, support only Tool invocation.

4.4.4 Sampling

Sampling is a powerful MCP feature that lets servers request LLM completions through the client, enabling sophisticated agentic behaviors while preserving security and privacy. (Example to be added.)

4.4.5 Roots

Roots are an MCP concept that defines the boundaries within which a server may operate. They give clients a way to tell servers which resources are relevant and where to find them. (Example to be added.)

5. MCP Roadmap

The official MCP roadmap sketches the direction for roughly the next six months:

5.1 Validation

  • Invest in reference client implementations - high-quality AI applications that demonstrate the protocol's features
  • Build a compliance test suite that automatically verifies that clients, servers, and SDKs implement the spec correctly
  • These tools will help developers implement MCP with confidence while keeping behavior consistent across the ecosystem

5.2 Registry

  • Develop an MCP Registry for centralized server discovery and metadata management
  • The registry will primarily serve as an API layer for third-party marketplaces and discovery services to build on
  • The goal is to streamline how MCP servers are distributed and discovered

5.3 Agents

  • Explore improvements such as Agent Graphs, enabling complex agent topologies through namespacing and graph-aware communication patterns
  • Improve interactive workflows through fine-grained permissioning, standardized interaction patterns, and direct communication with end users
  • As MCP increasingly becomes part of agentic workflows, these improvements will only grow in importance

5.4 Multimodality

  • Support the full range of AI capabilities, including video and other media types
  • Enable streaming, including multipart, chunked messages and bidirectional communication, for interactive experiences

5.5 Governance

  • Implement community-led development and foster a collaborative ecosystem so that MCP serves diverse applications and use cases
  • Establish a transparent standardization process for spec contributions, while exploring formal standardization through an industry body
