Build a Sleep Tracker with OpenCV and Python

In this tutorial we will build a sleep tracker that monitors our quality of sleep by tracking the position and movements of the head.

Excessive movement during sleep indicates disrupted sleep, so in its completed form the application should reveal whether we slept well or poorly.

How should the app work?

We can categorize the head positions as right side against pillow, left side against pillow, back of head facing out, or front of face facing out. We also want to produce a graph of head movements throughout the night.

If you want to do this yourself, you can use a Jetson Nano Developer Kit or a Raspberry Pi with a night-vision camera that uses infrared LEDs. Infrared is electromagnetic radiation with a wavelength longer than visible light, often used in night vision and heat detection.

We will use a 7-hour recording of my sleep for this application.

A frame from my 7-hour sleeping video

Analyzing the Video

I have created a Deep Learning model for determining the position of the head using YOLO.

Deep learning is a form of AI, allowing a program to learn for itself by mimicking a brain (neural network) to detect objects. YOLO is a real-time object detection system. We will import this model as HeadPosition.

import cv2
import numpy as np
from head_position import HeadPosition
import matplotlib.pyplot as plt
import math

cap = cv2.VideoCapture("sergio_sleeping.avi")

hp = HeadPosition()
classes = ["Left", "Right", "Front", "Back"]

We want to keep track of information for the head as the frames progress. The array for head position will tally the number of frames the head is in each position.

# Track head information
head_position = [0, 0, 0, 0]
head_distance_movement = []
total_frames = 0
center_prev = (0, 0)

As in previous examples, we will use a while loop to continuously cycle through the video frames. We include ret so that when the video ends and there are no more frames to read, the loop breaks.

while True:
    ret, frame = cap.read()
    if ret is False:
        break

    # Get head position
    ret, class_id, box, center = hp.get_head_position(frame)

If the head is detected, we want to extract that information. The box is the rectangle around the head, defined by its top-left corner (x, y) and its width and height (w, h). The model will draw the rectangle around the head.

Head detected

    if ret:
        x, y, w, h = box
        cv2.putText(frame, classes[class_id], (x, y - 15), 0, 1.3, hp.colors[class_id], 3)
        cv2.rectangle(frame, (x, y), (x + w, y + h), hp.colors[class_id], 3)
        cv2.circle(frame, center, 5, hp.colors[class_id], 3)

Again, this model will be looking to identify which of the 4 sides of the head (right, left, back, or front) is against the pillow. The class_id value assigns an index number to each side of the head:

  • 0 left side of head is on pillow
  • 1 right side of head is on pillow
  • 2 front of head is facing out
  • 3 back of head is facing out
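The class_id returned by the model is simply an index into the classes list defined earlier, so looking up the label is a one-liner:

```python
classes = ["Left", "Right", "Front", "Back"]

# class_id returned by the model selects the matching label
class_id = 2
print(classes[class_id])  # Front
```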

We can also use the model's colors to draw each head position's rectangle in a different color, making the positions easy to tell apart when reviewing the video.

To track the actual distance of head movement, we need to determine the center of the head in each frame. This center point will be marked with a circle. For example, if the center moves from (50, 50) to (40, 40), the point moved 10 pixels horizontally and 10 pixels vertically. We can determine the distance by finding the hypotenuse.
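To make the hypotenuse calculation concrete, here is the example above worked out with math.hypot, the same function used later in the loop:

```python
import math

# Center of the head in the previous frame and in the current frame
center_prev = (50, 50)
center = (40, 40)

# The horizontal and vertical displacements form the two legs of a
# right triangle; the distance moved is the hypotenuse
distance = math.hypot(center[0] - center_prev[0], center[1] - center_prev[1])
print(round(distance, 2))  # 14.14
```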

Head center (used to track the movements)

        # Update head information
        head_position[class_id] += 1
        total_frames += 1

        # Get center head movement
        x, y = center
        distance = math.hypot(x - center_prev[0], y - center_prev[1])
        head_distance_movement.append(distance)

        # Store current frame center
        center_prev = center

    # Show frames on screen
    cv2.imshow("Frame", frame)

    key = cv2.waitKey(1)
    if key == 27:
        break

cap.release()
cv2.destroyAllWindows()
Below we can see the head position detected with a rectangle, identified with text and its corresponding color, as well as the center point of the head. The program is running and detecting well, and now we can review and plot the data to make observations about sleep quality.

0 Left
1 Right
2 Front
3 Back

How well did you sleep? Plot Data on Graphs

We first plot the number of frames the head was in each position as a bar graph, using the classes list (which was used for labeling the rectangles) and head_position. Viewing the first graph below, we can see that my head predominantly stayed in the right and left positions (cheek against pillow).
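As a quick sanity check before plotting, the frame tallies can also be turned into percentages of the night. This is a minimal sketch; the head_position counts below are made-up values for illustration:

```python
classes = ["Left", "Right", "Front", "Back"]

# Hypothetical frame tallies for each head position (illustrative only)
head_position = [9800, 11200, 3100, 1100]
total_frames = sum(head_position)

# Share of the night spent in each position
for label, count in zip(classes, head_position):
    print(f"{label}: {count / total_frames:.0%}")
```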

For the second graph, we determined where the center of the head was in each frame and tracked the movement of that center point throughout the night, adding the distance moved in each frame to an array. When we plot this distance moved (y-axis) over time (x-axis), we can see spikes in movement during certain sleeping hours.
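A simple way to summarize that movement data is to total the distance moved in each hour (3600 frames per hour, assuming roughly 1 frame per second); a restless hour then shows up as a larger total. The distances below are randomly generated stand-ins for illustration:

```python
import random

# Hypothetical per-frame movement distances for 2 hours of sleep
random.seed(0)
head_distance_movement = [random.random() for _ in range(7200)]

# Total pixels moved in each hour of sleep (3600 frames per hour)
movement_per_hour = [
    sum(head_distance_movement[i:i + 3600])
    for i in range(0, len(head_distance_movement), 3600)
]
print(len(movement_per_hour))  # 2
```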

This project has many areas for potential future development, and if you find it interesting you can find the project on GitHub and contribute!

# Plot sleeping information
# Convert frame counts to hours (assuming the video runs at 1 frame per second)
head_position_hours = [x / 3600 for x in head_position]
head_distance_hours = [x / 3600 for x in range(len(head_distance_movement))]

fig, (ax1, ax2) = plt.subplots(1, 2)
ax1.bar(classes, head_position_hours)
ax2.plot(head_distance_hours, head_distance_movement)
plt.show()

How can I contribute to this project?

The project is open source and available on GitHub at this link:

You can contribute to the project in different ways, whether you’re a developer or not, by:

  1. Providing images for the dataset. The current dataset is trained only on images of my head (I took images of the 4 different sides: left, right, front and back), but in order to work for every person we need a bigger dataset with images of different people.
  2. Sharing your ideas. If you have ideas about how to improve this project in any way, whether suggestions about the graphs or about how to better track the movements, feel free to share them.
  3. Writing code. If you're a developer, you can improve and extend the current code with new functionality.

Learn to build Computer Vision Software easily and efficiently.

This is a FREE Workshop where I'm going to break down the 4 steps that are necessary to build software to detect and track any object.

Sign Up for FREE