Python OpenCV Motion Detection Made Simple
Motion detection is a fundamental aspect of computer vision applications such as surveillance, security systems, and automated monitoring. Using OpenCV, we can implement a simple motion detection system that identifies changes in a video stream.
Step 1: Install OpenCV
Ensure you have OpenCV installed before proceeding. If not, install it using:
pip install opencv-python
Step 2: Capture Video Stream
We will start by capturing the video stream from a webcam or a pre-recorded video.
import cv2
# Capture video from the webcam
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow("Video Stream", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Step 3: Convert Frames to Grayscale and Apply Gaussian Blur
To reduce noise and improve motion detection accuracy, we convert frames to grayscale and apply Gaussian blur.
def preprocess_frame(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    return gray
Step 4: Detect Motion
We compare each new frame against a reference frame (the first frame captured) and treat large differences as motion.
first_frame = None

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = preprocess_frame(frame)
    if first_frame is None:
        first_frame = gray
        continue
    frame_diff = cv2.absdiff(first_frame, gray)
    thresh = cv2.threshold(frame_diff, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    cv2.imshow("Motion Detection", thresh)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Step 5: Highlight Motion Using Contours
We use contours to highlight areas where motion is detected.
import numpy as np
while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = preprocess_frame(frame)
    if first_frame is None:
        first_frame = gray
        continue
    frame_diff = cv2.absdiff(first_frame, gray)
    thresh = cv2.threshold(frame_diff, 25, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.dilate(thresh, None, iterations=2)
    contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 500:
            continue
        (x, y, w, h) = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Motion Detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Conclusion
With OpenCV, you can easily implement a real-time motion detection system by processing video frames, detecting changes, and highlighting motion regions using contours. You can further enhance this system by integrating it with alarms, notifications, or object tracking. Try experimenting with different threshold values and blur settings to refine detection accuracy!
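As one way to act on detected motion, the hypothetical helper below saves a snapshot whenever a sufficiently large contour appears. It is a minimal sketch: the area threshold, file-name scheme, and choice of action are illustrative, not part of the tutorial. You could call it inside the Step 5 loop right after findContours.
import time
import cv2

def save_motion_snapshot(frame, contours, min_area=500, prefix="motion"):
    # Write the current frame to disk if any contour exceeds min_area.
    # min_area and the file-name scheme are illustrative defaults.
    if any(cv2.contourArea(c) >= min_area for c in contours):
        filename = f"{prefix}_{int(time.time())}.jpg"
        cv2.imwrite(filename, frame)
        return filename
    return None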
Python OpenCV Master Edge Detection Fast
Edge detection is a crucial technique in computer vision, widely used in applications like object detection, image segmentation, and feature extraction. OpenCV makes it simple to implement edge detection with powerful algorithms like the Canny Edge Detector.
Step 1: Install OpenCV
Before we begin, ensure you have OpenCV installed. If not, install it using:
pip install opencv-python
Step 2: Load and Convert Image to Grayscale
Since edge detection works best in grayscale, we first load the image and convert it.
import cv2
# Load the image
image = cv2.imread('image.jpg')
# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Step 3: Apply Gaussian Blur
Blurring helps to reduce noise and improve edge detection accuracy.
# Apply Gaussian blur
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
Step 4: Perform Edge Detection Using Canny
The Canny edge detector is one of the most widely used edge detection techniques.
# Apply Canny Edge Detection
edges = cv2.Canny(blurred, 50, 150)
Step 5: Display the Result
cv2.imshow('Edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
Bonus: Edge Detection in Real-Time (Webcam)
To detect edges in real-time using a webcam, use the following code:
# Open webcam
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    cv2.imshow('Real-Time Edge Detection', edges)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Conclusion
In just a few steps, you’ve mastered edge detection using OpenCV. This technique is essential for various image processing applications, from object recognition to medical imaging. Experiment with different threshold values to fine-tune detection for different images!
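If you want to explore thresholds interactively, here is a minimal sketch using OpenCV trackbars; the window and trackbar names and the starting values are arbitrary choices for illustration.
import cv2

image = cv2.imread('image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

cv2.namedWindow('Tuned Edges')
cv2.createTrackbar('low', 'Tuned Edges', 50, 255, lambda v: None)
cv2.createTrackbar('high', 'Tuned Edges', 150, 255, lambda v: None)

while True:
    low = cv2.getTrackbarPos('low', 'Tuned Edges')
    high = cv2.getTrackbarPos('high', 'Tuned Edges')
    edges = cv2.Canny(blurred, low, high)
    cv2.imshow('Tuned Edges', edges)
    if cv2.waitKey(30) & 0xFF == ord('q'):
        break

cv2.destroyAllWindows()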
Python OpenCV Magic: Transform Images Like a Pro
Python’s OpenCV library is a powerful tool for image processing, offering a wide range of functions to manipulate and transform images effortlessly. Whether you’re a beginner or an experienced developer, OpenCV allows you to apply effects, enhance images, and extract useful information with just a few lines of code. In this article, we’ll explore some of the most useful OpenCV techniques that can transform your images like a pro.
- Reading and Displaying Images
Before applying any transformations, we first need to load and display images using OpenCV.
Code Example:
import cv2
image = cv2.imread('image.jpg')
cv2.imshow('Original Image', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Converting to Grayscale
Many image processing tasks require grayscale images. Converting an image to grayscale reduces computational complexity and enhances edge detection.
Code Example:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow('Grayscale Image', gray)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Resizing and Cropping
Resizing and cropping images are essential for pre-processing before feeding them into a model.
Code Example:
resized = cv2.resize(image, (300, 300))
cropped = image[50:200, 100:300]
cv2.imshow('Resized Image', resized)
cv2.imshow('Cropped Image', cropped)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Applying Filters (Blurring and Sharpening)
Blurring smooths out noise, while sharpening enhances edges.
Blurring Example:
blurred = cv2.GaussianBlur(image, (15, 15), 0)
cv2.imshow('Blurred Image', blurred)
cv2.waitKey(0)
cv2.destroyAllWindows()
Sharpening Example:
import numpy as np
kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]])
sharpened = cv2.filter2D(image, -1, kernel)
cv2.imshow('Sharpened Image', sharpened)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Edge Detection with Canny Algorithm
Edge detection is useful for object detection and feature extraction.
Code Example:
edges = cv2.Canny(image, 100, 200)
cv2.imshow('Edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Image Thresholding for Binarization
Thresholding converts images into binary format, which is useful for shape detection.
Code Example:
_, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
cv2.imshow('Binary Image', binary)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Contour Detection
Contours are useful for detecting objects in an image.
Code Example:
contours, _ = cv2.findContours(binary, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(image, contours, -1, (0, 255, 0), 2)
cv2.imshow('Contours', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Morphological Transformations (Erosion & Dilation)
Erosion and dilation are used to enhance or suppress image features.
Erosion Example:
kernel = np.ones((5, 5), np.uint8)
eroded = cv2.erode(binary, kernel, iterations=1)
cv2.imshow('Eroded Image', eroded)
cv2.waitKey(0)
cv2.destroyAllWindows()
Dilation Example:
dilated = cv2.dilate(binary, kernel, iterations=1)
cv2.imshow('Dilated Image', dilated)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Image Perspective Transformation
Perspective transformation allows us to change the viewpoint of an image.
Code Example:
pts1 = np.float32([[50, 50], [200, 50], [50, 200], [200, 200]])
pts2 = np.float32([[10, 100], [180, 50], [100, 250], [250, 250]])
M = cv2.getPerspectiveTransform(pts1, pts2)
warped = cv2.warpPerspective(image, M, (300, 300))
cv2.imshow('Warped Image', warped)
cv2.waitKey(0)
cv2.destroyAllWindows()
- Face Detection with OpenCV
OpenCV has a built-in face detector that can be used to detect faces in an image.
Code Example:
face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.1, 4)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 2)

cv2.imshow('Face Detection', image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Conclusion
With OpenCV, you can perform a wide range of image transformations to enhance, analyze, and manipulate images efficiently. Whether you’re working on a computer vision project or just exploring image processing, OpenCV provides a simple yet powerful framework to get started. Experiment with these techniques and take your image processing skills to the next level!
Python OpenCV Hand Gesture Recognition Trick
Hand gesture recognition is an exciting computer vision application that allows interaction with devices using hand movements. With OpenCV, we can create a simple yet effective hand gesture recognition system.
Step 1: Install OpenCV and Mediapipe
Ensure the required libraries are installed:
pip install opencv-python mediapipe numpy
Step 2: Import Libraries and Initialize Mediapipe
Mediapipe is a powerful library for real-time hand tracking.
import cv2
import mediapipe as mp
mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils
hands = mp_hands.Hands(min_detection_confidence=0.7, min_tracking_confidence=0.7)
Step 3: Capture Video Feed
Open a video stream to detect hands in real time:
cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.flip(frame, 1)  # Flip for mirror effect
    rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    results = hands.process(rgb_frame)
    if results.multi_hand_landmarks:
        for hand_landmarks in results.multi_hand_landmarks:
            mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
    cv2.imshow("Hand Gesture Recognition", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Step 4: Recognizing Specific Gestures
By analyzing landmark positions, we can classify different gestures. Here’s an example of recognizing an open palm:
def is_open_palm(hand_landmarks):
    thumb_tip = hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP].y
    index_tip = hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y
    middle_tip = hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y
    ring_tip = hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_TIP].y
    pinky_tip = hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_TIP].y
    return (index_tip < thumb_tip and middle_tip < thumb_tip and
            ring_tip < thumb_tip and pinky_tip < thumb_tip)
Modify the video loop to check for gestures:
if results.multi_hand_landmarks:
    for hand_landmarks in results.multi_hand_landmarks:
        mp_draw.draw_landmarks(frame, hand_landmarks, mp_hands.HAND_CONNECTIONS)
        if is_open_palm(hand_landmarks):
            cv2.putText(frame, "Open Palm Detected", (50, 50),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
Conclusion
Using OpenCV and Mediapipe, we can recognize hand gestures in real-time and map them to actions. Expand this by adding gesture-based commands for controlling applications, games, or IoT devices!
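As a minimal sketch of such a gesture-based command, the helper below fires an action only after the palm has stayed open for several consecutive frames; the debounce length, the file name, and the choice of action (saving a snapshot) are illustrative assumptions, not part of Mediapipe or the original loop.
import time
import cv2

OPEN_PALM_FRAMES_REQUIRED = 15   # illustrative debounce length
_open_palm_counter = 0

def handle_open_palm(frame, palm_is_open):
    # Call once per frame from the video loop with the result of is_open_palm().
    global _open_palm_counter
    if palm_is_open:
        _open_palm_counter += 1
        if _open_palm_counter >= OPEN_PALM_FRAMES_REQUIRED:
            # Illustrative action: save a snapshot of the current frame.
            cv2.imwrite(f"gesture_{int(time.time())}.jpg", frame)
            _open_palm_counter = 0
    else:
        _open_palm_counter = 0
Inside the video loop, call handle_open_palm(frame, is_open_palm(hand_landmarks)) right after drawing the landmarks.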
Python OpenCV Create Stunning Image Filters
Image filtering is a key technique in computer vision, enabling effects like blurring, sharpening, and edge detection. Using OpenCV, we can create stunning image filters with just a few lines of code.
Step 1: Install OpenCV
Ensure OpenCV is installed by running:
pip install opencv-python numpy
Step 2: Load and Display an Image
Start by loading an image using OpenCV:
import cv2
import numpy as np
# Load the image
image = cv2.imread("sample.jpg")
cv2.imshow("Original Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 3: Apply a Blurring Filter
Blurring removes noise and smooths the image. Gaussian blur is a popular choice:
blurred = cv2.GaussianBlur(image, (15, 15), 0)
cv2.imshow("Blurred Image", blurred)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 4: Apply Edge Detection
Edge detection highlights object boundaries in an image:
edges = cv2.Canny(image, 100, 200)
cv2.imshow("Edge Detection", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 5: Convert Image to Pencil Sketch
Convert an image into a pencil sketch by blending grayscale and inverted blurred images:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
inverted = 255 - gray
blurred = cv2.GaussianBlur(inverted, (21, 21), 0)
sketch = cv2.divide(gray, 255 - blurred, scale=256)
cv2.imshow("Pencil Sketch", sketch)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 6: Apply a Sepia Effect
Sepia filters give images a warm, vintage look:
sepia_filter = np.array([[0.272, 0.534, 0.131],
                         [0.349, 0.686, 0.168],
                         [0.393, 0.769, 0.189]])
sepia_image = cv2.transform(image, sepia_filter)
sepia_image = np.clip(sepia_image, 0, 255)
cv2.imshow("Sepia Effect", sepia_image.astype(np.uint8))
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 7: Apply a Cartoon Effect
Cartoonizing an image involves bilateral filtering and edge detection:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
edges = cv2.adaptiveThreshold(cv2.medianBlur(gray, 7), 255,
                              cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 2)
color = cv2.bilateralFilter(image, 9, 300, 300)
cartoon = cv2.bitwise_and(color, color, mask=edges)
cv2.imshow("Cartoon Effect", cartoon)
cv2.waitKey(0)
cv2.destroyAllWindows()
Conclusion
With OpenCV, you can apply various image filters to enhance photos, detect edges, or create artistic effects like pencil sketches and cartoons. Experiment with different filters to create visually striking transformations!
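As one more filter to experiment with, here is a minimal sketch that boosts color saturation in HSV space; the 1.3 scaling factor and the file name are illustrative choices.
import cv2
import numpy as np

image = cv2.imread("sample.jpg")
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV).astype(np.float32)
hsv[:, :, 1] = np.clip(hsv[:, :, 1] * 1.3, 0, 255)   # scale the saturation channel
vivid = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

cv2.imshow("Saturation Boost", vivid)
cv2.waitKey(0)
cv2.destroyAllWindows()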
Python OpenCV Convert Images to Cartoon Easily
Transforming images into cartoon-style visuals is a fun and creative application of OpenCV. With a few simple steps, you can achieve a cartoon effect by applying edge detection and smoothing techniques.
Step 1: Install OpenCV
Ensure you have OpenCV installed. If not, install it using:
pip install opencv-python
Step 2: Load the Image
First, we load the image that we want to convert into a cartoon.
import cv2
# Load the image
image = cv2.imread('image.jpg')
cv2.imshow("Original Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 3: Convert Image to Grayscale
To simplify the processing, convert the image to grayscale.
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cv2.imshow("Grayscale Image", gray)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 4: Apply Median Blur
Blurring the grayscale image helps remove noise and create a smooth effect.
blurred = cv2.medianBlur(gray, 5)
cv2.imshow("Blurred Image", blurred)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 5: Detect Edges Using Adaptive Thresholding
Edge detection is crucial for creating the outlines of the cartoon effect.
edges = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)
cv2.imshow("Edges", edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 6: Apply Bilateral Filter for Smoothing
Bilateral filtering enhances color while preserving edges, giving a cartoon-like effect.
color = cv2.bilateralFilter(image, 9, 250, 250)
cv2.imshow("Smoothed Image", color)
cv2.waitKey(0)
cv2.destroyAllWindows()
Step 7: Combine Edges and Smoothed Image
Finally, merge the color image with the edges to create the final cartoon effect.
cartoon = cv2.bitwise_and(color, color, mask=edges)
cv2.imshow("Cartoon Image", cartoon)
cv2.waitKey(0)
cv2.destroyAllWindows()
Bonus: Convert Webcam Feed to Cartoon in Real-Time
If you want to apply this effect to a live video feed, use the following code:
cap = cv2.VideoCapture(0)

while True:
    ret, frame = cap.read()
    if not ret:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.medianBlur(gray, 5)
    edges = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 9, 9)
    color = cv2.bilateralFilter(frame, 9, 250, 250)
    cartoon = cv2.bitwise_and(color, color, mask=edges)
    cv2.imshow("Cartoon Video", cartoon)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Conclusion
Using OpenCV, you can easily transform images into cartoon-like effects. Try experimenting with different parameters to get the desired artistic effect. Enjoy cartoonizing your images!
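To make that experimentation easier, here is a minimal sketch that wraps the article's pipeline in a single helper whose parameters you can vary; the default values mirror the steps above, and the alternative values in the usage comment are just examples.
import cv2

def cartoonize(image, blur_ksize=5, block_size=9, c=9, d=9, sigma=250):
    # blur_ksize and block_size must be odd; larger values give smoother results.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.medianBlur(gray, blur_ksize)
    edges = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                  cv2.THRESH_BINARY, block_size, c)
    color = cv2.bilateralFilter(image, d, sigma, sigma)
    return cv2.bitwise_and(color, color, mask=edges)

# Example usage (file name and parameter values are illustrative):
# cartoon = cartoonize(cv2.imread('image.jpg'), blur_ksize=7, block_size=11, c=7)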
Python OpenCV Build a Fun Face Swap Tool
Face swapping is a fascinating computer vision trick that allows you to swap faces between two people in real-time. Using OpenCV and dlib, we can build a simple face swap tool that works efficiently.
Step 1: Install Required Libraries
Make sure OpenCV and dlib are installed:
pip install opencv-python dlib numpy
Step 2: Import Libraries and Load Models
import cv2
import dlib
import numpy as np
# Load facial landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
Step 3: Define Helper Functions
Extract Facial Landmarks:
def get_landmarks(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if len(faces) == 0:
        return None
    return predictor(gray, faces[0])
Warp Face to Target:
def warp_face(source_img, target_img, landmarks_src, landmarks_tgt):
    # convexHull and estimateAffinePartial2D expect int32/float32 point arrays
    hull_index = cv2.convexHull(np.array(landmarks_tgt, dtype=np.int32), returnPoints=False)
    hull_src = [landmarks_src[i[0]] for i in hull_index]
    hull_tgt = [landmarks_tgt[i[0]] for i in hull_index]
    warp_matrix = cv2.estimateAffinePartial2D(np.array(hull_src, dtype=np.float32),
                                              np.array(hull_tgt, dtype=np.float32))[0]
    warped_face = cv2.warpAffine(source_img, warp_matrix, (target_img.shape[1], target_img.shape[0]))
    return warped_face
Step 4: Implement Face Swapping
def face_swap(source_img, target_img):
    landmarks_src = get_landmarks(source_img)
    landmarks_tgt = get_landmarks(target_img)
    if landmarks_src is None or landmarks_tgt is None:
        print("No face detected!")
        return target_img
    points_src = [(p.x, p.y) for p in landmarks_src.parts()]
    points_tgt = [(p.x, p.y) for p in landmarks_tgt.parts()]
    swapped_face = warp_face(source_img, target_img, points_src, points_tgt)
    mask = np.zeros_like(target_img[:, :, 0])
    cv2.fillConvexPoly(mask, np.array(points_tgt, dtype=np.int32), 255)
    # Center the clone on the target face rather than the image center
    x, y, w, h = cv2.boundingRect(np.array(points_tgt, dtype=np.int32))
    result = cv2.seamlessClone(swapped_face, target_img, mask,
                               (x + w // 2, y + h // 2), cv2.NORMAL_CLONE)
    return result
Step 5: Run Real-Time Face Swap
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    target_face = frame.copy()  # Placeholder: swaps the frame with itself; use a static image or another face (see the sketch after the conclusion)
    swapped = face_swap(target_face, frame)
    cv2.imshow("Face Swap Tool", swapped)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Conclusion
This face swap tool demonstrates how OpenCV and dlib can be used for real-time facial transformations. You can enhance it further by swapping faces in videos or adding deep learning models for more realistic results!
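For a more meaningful swap than copying the live frame onto itself, here is a minimal sketch that loads a donor face from disk once and swaps it onto every webcam frame; the file name source_face.jpg is a hypothetical placeholder.
import cv2

source_img = cv2.imread("source_face.jpg")   # hypothetical donor image

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    swapped = face_swap(source_img, frame)   # donor face onto the live frame
    cv2.imshow("Face Swap Tool", swapped)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()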
