Face Recognition Notes

Use Python's MediaPipe to detect a face and drive the facial expressions of a 3D character in Unity. Below is a step-by-step guide to implementing facial expression control with Python, MediaPipe, and Unity:

Step 1: Install the Required Libraries

pip install mediapipe opencv-python

Step 2: Python Face-Tracking Script

import cv2
import mediapipe as mp
import socket
import json

# Initialize MediaPipe Face Mesh
mp_face_mesh = mp.solutions.face_mesh
face_mesh = mp_face_mesh.FaceMesh(
    static_image_mode=False,
    max_num_faces=1,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5)

# Landmark indices (adjust per the official MediaPipe Face Mesh documentation)
MOUTH_TOP = 13    # center of the upper lip
MOUTH_BOTTOM = 14 # center of the lower lip
LEFT_EYE_TOP = 159
LEFT_EYE_BOTTOM = 145

# Socket server setup
server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_socket.bind(('127.0.0.1', 12345))
server_socket.listen(1)
print("Waiting for Unity to connect...")
conn, addr = server_socket.accept()
print("Connected:", addr)

cap = cv2.VideoCapture(0)

while cap.isOpened():
    success, frame = cap.read()
    if not success:
        break

    # Convert to RGB and run the face mesh
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = face_mesh.process(image)

    if results.multi_face_landmarks:
        landmarks = results.multi_face_landmarks[0].landmark

        # Measure the mouth opening
        mouth_top = landmarks[MOUTH_TOP]
        mouth_bottom = landmarks[MOUTH_BOTTOM]
        mouth_open = mouth_bottom.y - mouth_top.y

        # Measure the left-eye opening (smaller value = more closed)
        eye_top = landmarks[LEFT_EYE_TOP]
        eye_bottom = landmarks[LEFT_EYE_BOTTOM]
        eye_close = eye_bottom.y - eye_top.y

        # Build the data packet
        data = {
            "mouth_open": float(mouth_open * 100),  # scaled up for readability
            "eye_close": float(eye_close * 100)
        }

        # Send newline-delimited JSON
        conn.send(json.dumps(data).encode() + b'\n')

    # Show the preview window (optional)
    cv2.imshow('MediaPipe FaceMesh', cv2.flip(frame, 1))
    if cv2.waitKey(5) & 0xFF == 27:  # ESC quits
        break

# Clean up resources
cap.release()
cv2.destroyAllWindows()
conn.close()
server_socket.close()
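The script frames each packet as one line of JSON terminated by `'\n'`. Since TCP is a byte stream, the receiver may get a partial packet or several packets in a single read, so the newline is what delimits messages. Here is a minimal sketch of that framing logic (the helper names are my own, not part of the script above):

```python
import json

def encode_packet(data: dict) -> bytes:
    """Serialize one packet as newline-delimited JSON."""
    return json.dumps(data).encode() + b"\n"

def feed(buffer: bytes, chunk: bytes):
    """Append a received chunk, then split off every complete packet."""
    buffer += chunk
    packets = []
    while b"\n" in buffer:
        line, buffer = buffer.split(b"\n", 1)
        if line:
            packets.append(json.loads(line))
    return buffer, packets

# A packet can arrive split across two reads:
buf, msgs = feed(b"", b'{"mouth_open": 3.1, "eye_c')
print(msgs)  # [] -- incomplete, held in the buffer
buf, msgs = feed(buf, b'lose": 1.2}\n')
print(msgs)  # [{'mouth_open': 3.1, 'eye_close': 1.2}]
```

Note that the C# receiver in Step 3 splits each read on `'\n'` without carrying a buffer between reads, so a packet that straddles two reads is simply discarded by its catch block; carrying a buffer as sketched here is how to make the stream handling fully robust.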

Step 3: Unity Setup

  1. Create a 3D character and set up Blend Shapes

    • Make sure the character has matching Blend Shapes (e.g. "MouthOpen", "EyeClose")
  2. Create a C# script, FaceController.cs

    using UnityEngine;
    using System.Net.Sockets;
    using System.Text;
    using System.Threading;
    using System.Collections.Concurrent;
    public class FaceController : MonoBehaviour
    {
    public SkinnedMeshRenderer faceMesh;
    public string mouthBlendShapeName = "MouthOpen";
    public string eyeBlendShapeName = "EyeClose";
    
    private TcpClient client;
    private NetworkStream stream;
    private Thread receiveThread;
    private bool isRunning;
    
    private ConcurrentQueue<string> dataQueue = new ConcurrentQueue<string>();
    private float mouthValue;
    private float eyeValue;
    
    void Start()
    {
        ConnectToServer();
    }
    
    void ConnectToServer()
    {
        try
        {
            client = new TcpClient("127.0.0.1", 12345);
            stream = client.GetStream();
            isRunning = true;
    
            receiveThread = new Thread(() =>
            {
                byte[] buffer = new byte[1024];
                while (isRunning)
                {
                    try
                    {
                        int bytesRead = stream.Read(buffer, 0, buffer.Length);
                        if (bytesRead > 0)
                        {
                            string data = Encoding.UTF8.GetString(buffer, 0, bytesRead);
                            dataQueue.Enqueue(data);
                        }
                    }
                    catch { break; }
                }
            });
    
            receiveThread.Start();
        }
        catch (System.Exception e)
        {
            Debug.LogError("Connection error: " + e.Message);
        }
    }
    
    void Update()
    {
        // Drain all queued packets
        while (dataQueue.TryDequeue(out string rawData))
        {
            foreach (var json in rawData.Split('\n'))
            {
                if (string.IsNullOrEmpty(json)) continue;
    
                try
                {
                    var data = JsonUtility.FromJson<FaceData>(json);
                    mouthValue = data.mouth_open;
                    eyeValue = data.eye_close;
                }
                catch (System.Exception e)
                {
                    Debug.LogWarning("Parse error: " + e.Message);
                }
            }
        }
    
        // Apply the blend shape weights
        if (faceMesh != null)
        {
            int mouthIndex = faceMesh.sharedMesh.GetBlendShapeIndex(mouthBlendShapeName);
            int eyeIndex = faceMesh.sharedMesh.GetBlendShapeIndex(eyeBlendShapeName);
    
            if (mouthIndex != -1) 
                faceMesh.SetBlendShapeWeight(mouthIndex, mouthValue);
            if (eyeIndex != -1)
                faceMesh.SetBlendShapeWeight(eyeIndex, eyeValue);
        }
    }
    
    void OnDestroy()
    {
        isRunning = false;
        stream?.Close();
        client?.Close();
    }
    
    [System.Serializable]
    class FaceData
    {
        public float mouth_open;
        public float eye_close;
    }
    }

Step 4: Unity Configuration

  1. Attach the script to the character object
  2. In the Inspector:
    • Drag the character's SkinnedMeshRenderer into the faceMesh field
    • Enter the correct Blend Shape names

Step 5: Running

  1. Run the Python script first
  2. When the script reports that it is waiting for a connection, press Play in Unity
  3. Make expressions at the camera and watch the character respond
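To verify the socket link and packet format without Unity or a camera, a small loopback test can stand in for both ends: a background thread plays the tracker's role and sends two sample packets, while the main thread reads them back the way the Unity client would. (This test script is my own addition; it lets the OS pick a free port so it won't clash with the real server.)

```python
import json
import socket
import threading

# Bind and listen in the main thread so the client cannot
# attempt to connect before the server is ready.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve():
    """Accept one client and send two sample packets, like the tracker."""
    conn, _ = srv.accept()
    for sample in ({"mouth_open": 2.5, "eye_close": 1.0},
                   {"mouth_open": 4.0, "eye_close": 0.8}):
        conn.send(json.dumps(sample).encode() + b"\n")
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
received = b""
while chunk := client.recv(1024):  # empty bytes = server closed
    received += chunk
client.close()
t.join()
srv.close()

packets = [json.loads(line) for line in received.split(b"\n") if line]
print(packets)  # both packets, in send order
```

If this round-trip works but Unity still shows nothing, the problem is on the Unity side (blend shape names, Inspector wiring) rather than in the tracking script.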

Suggested Enhancements

  1. Add more expression parameters:

    # Eyebrow control landmarks
    EYEBROW_LEFT = 105
    EYEBROW_RIGHT = 334

Add the eyebrow position to the data packet:

    eyebrow_left = landmarks[EYEBROW_LEFT].y
    data["eyebrow_left"] = float(eyebrow_left * 100)

2. Add the corresponding Blend Shape controls in Unity

3. Add a calibration feature (press the space bar in the Python script to capture a neutral pose):
```python
calibration = {}

# Note: if combined with the ESC check in the main loop, reuse a single
# waitKey() result per frame instead of calling it twice.
if cv2.waitKey(5) & 0xFF == 32:  # space bar
    calibration["mouth_neutral"] = mouth_open
    calibration["eye_neutral"] = eye_close
```

4. Normalize measurements against the calibrated neutral values:

```python
mouth_open = (current_value - calibration["mouth_neutral"]) * sensitivity
```
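Putting calibration and normalization together: a small helper (the function name and the default `sensitivity` are my own choices, to be tuned per blend shape) can map a raw landmark distance to the 0-100 weight range Unity's blend shapes expect, clamped so jitter around the neutral pose cannot produce negative weights:

```python
def to_blend_weight(current, neutral, sensitivity=500.0, max_weight=100.0):
    """Map a raw landmark distance to a 0-100 blend shape weight.

    `neutral` is the distance captured during calibration. MediaPipe
    landmark coordinates are normalized to [0, 1], so the deltas are
    tiny; `sensitivity` scales them up to Unity's weight range.
    """
    weight = (current - neutral) * sensitivity
    return max(0.0, min(max_weight, weight))

print(to_blend_weight(0.125, 0.0))  # 62.5
print(to_blend_weight(-0.05, 0.0))  # 0.0   (clamped at the neutral pose)
print(to_blend_weight(0.5, 0.0))    # 100.0 (clamped at full weight)
```

In the main loop this would replace the raw `mouth_open * 100` scaling once a neutral pose has been captured.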

Troubleshooting

  1. Make sure the firewall allows local connections
  2. Check that the Blend Shape names match
  3. Confirm that camera permission has been granted
  4. Adjust the landmark indices in the Python script if needed
  5. Use Debug.Log to print the received values and verify the data flow

This setup provides a complete pipeline from real-time facial capture to driving a virtual character, and it can be extended with additional facial-feature controls as needed.
