How to Set Up YOLO11 for Object Detection

YOLO (You Only Look Once) is a family of real-time object detection models widely used in computer vision. This guide walks you through setting up YOLO11, training a model, and testing it.


Step 1: Clone the YOLO Repository

Begin by cloning the YOLO repository from GitHub. This repository contains all the essential files required for setting up and running YOLO.

git clone https://github.com/ultralytics/ultralytics

Step 2: Open the Cloned Repository in VS Code

After cloning the repository, open it in Visual Studio Code (VS Code) or your preferred code editor. This will allow you to easily configure and execute the necessary scripts.
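If VS Code's command-line launcher is on your PATH (it usually is after a default Windows install), you can open the folder straight from the terminal:

cd ultralytics
code .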


Step 3: Create a Python Virtual Environment

It’s best practice to use a virtual environment to isolate your project dependencies. On Windows, use the py launcher to create a Python 3.10 virtual environment:

py -3.10 -m venv .venv310
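The py launcher above is Windows-specific. On macOS or Linux, assuming Python 3.10 is installed, the equivalent command is:

python3.10 -m venv .venv310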

Step 4: Activate the Virtual Environment and Install Dependencies

Activate the virtual environment using PowerShell:

.venv310\Scripts\Activate.ps1

Then install the required packages:

pip install ultralytics torch torchvision

Once activated, your terminal prompt should show the environment name (.venv310), confirming that subsequent commands run inside it.
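To double-check that the environment is active and the installs succeeded, here is a quick sanity check (this assumes the pip install above completed without errors):

python -c "import sys; print(sys.prefix)"
python -c "import torch, ultralytics; print(torch.__version__, ultralytics.__version__)"

The first command should print a path ending in .venv310; the second confirms both packages import and prints their versions.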


Step 5: Create a coco8.yaml File

Create a new YAML file named coco8.yaml in the root directory of the project. Use the following content (note that it mirrors Ultralytics’ full COCO 2017 config, so the first training run will download roughly 20 GB of data):

# Ultralytics YOLO 🚀, AGPL-3.0 license
# COCO 2017 dataset https://cocodataset.org by Microsoft
# Documentation: https://docs.ultralytics.com/datasets/detect/coco/
# Example usage: yolo train data=coco8.yaml
# parent
# ├── ultralytics
# └── datasets
#     └── coco  ← downloads here (20.1 GB)

# Train/val/test sets as 1) dir: path/to/imgs, 2) file: path/to/imgs.txt, or 3) list: [path/to/imgs1, path/to/imgs2, ..]
path: ../datasets/coco # dataset root dir
train: train2017.txt # train images (relative to 'path') 118287 images
val: val2017.txt # val images (relative to 'path') 5000 images
test: test-dev2017.txt # 20288 of 40670 images, submit to https://competitions.codalab.org/competitions/20794

# Classes
names:
  0: person
  1: bicycle
  2: car
  3: motorcycle
  4: airplane
  5: bus
  6: train
  7: truck
  8: boat
  9: traffic light
  10: fire hydrant
  11: stop sign
  12: parking meter
  13: bench
  14: bird
  15: cat
  16: dog
  17: horse
  18: sheep
  19: cow
  20: elephant
  21: bear
  22: zebra
  23: giraffe
  24: backpack
  25: umbrella
  26: handbag
  27: tie
  28: suitcase
  29: frisbee
  30: skis
  31: snowboard
  32: sports ball
  33: kite
  34: baseball bat
  35: baseball glove
  36: skateboard
  37: surfboard
  38: tennis racket
  39: bottle
  40: wine glass
  41: cup
  42: fork
  43: knife
  44: spoon
  45: bowl
  46: banana
  47: apple
  48: sandwich
  49: orange
  50: broccoli
  51: carrot
  52: hot dog
  53: pizza
  54: donut
  55: cake
  56: chair
  57: couch
  58: potted plant
  59: bed
  60: dining table
  61: toilet
  62: tv
  63: laptop
  64: mouse
  65: remote
  66: keyboard
  67: cell phone
  68: microwave
  69: oven
  70: toaster
  71: sink
  72: refrigerator
  73: book
  74: clock
  75: vase
  76: scissors
  77: teddy bear
  78: hair drier
  79: toothbrush

# Download script/URL (optional)
download: |
  from ultralytics.utils.downloads import download
  from pathlib import Path

  # Download labels
  segments = True  # segment or box labels
  dir = Path(yaml['path'])  # dataset root dir
  url = 'https://github.com/ultralytics/assets/releases/download/v0.0.0/'
  urls = [url + ('coco2017labels-segments.zip' if segments else 'coco2017labels.zip')]  # labels
  download(urls, dir=dir.parent)
  # Download data
  urls = ['http://images.cocodataset.org/zips/train2017.zip',  # 19G, 118k images
          'http://images.cocodataset.org/zips/val2017.zip',  # 1G, 5k images
          'http://images.cocodataset.org/zips/test2017.zip']  # 7G, 41k images (optional)
  download(urls, dir=dir / 'images', threads=3)

This file specifies your dataset path, classes, and other details required for training.
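Before training, it can help to confirm the file parses as expected. A minimal check, assuming PyYAML (installed as an ultralytics dependency) and that coco8.yaml sits in the current directory:

import yaml

# Load the dataset config and print the fields training will use
with open("coco8.yaml") as f:
    cfg = yaml.safe_load(f)

print(cfg["path"])        # dataset root dir
print(len(cfg["names"]))  # should print 80 classes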


Step 6: Train the Model Without GPU

Run the following command in your terminal to start training. The device=cpu flag forces training on the CPU; if you have a CUDA-capable GPU, use device=0 instead.

yolo task=detect mode=train data=coco8.yaml model=yolo11n.pt epochs=100 imgsz=640 device=cpu

This command starts training from the pretrained yolo11n.pt checkpoint on the dataset defined in coco8.yaml.
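The same run can also be launched from Python with the ultralytics API, which is handy inside notebooks or larger scripts:

from ultralytics import YOLO

# Load the pretrained checkpoint (downloaded automatically if missing)
model = YOLO("yolo11n.pt")

# Train with the same settings as the CLI command above
model.train(data="coco8.yaml", epochs=100, imgsz=640, device="cpu")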


Step 7: Locate the Trained Model

After the training process is complete, the trained model files will be available in the following directory:

...\ultralytics\runs\detect

You can find the weights and logs for your trained model here.
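Each training run gets its own incrementing subfolder (train, train2, train3, and so on). If you lose track of which run is newest, here is a small sketch, assuming the default runs/detect layout relative to where you launched training:

from pathlib import Path

# Pick the most recently modified best.pt across all runs
weights = sorted(Path("runs/detect").glob("*/weights/best.pt"),
                 key=lambda p: p.stat().st_mtime)
print(weights[-1] if weights else "no trained weights found yet")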


Step 8: Test Your Model

Using the Command Line Interface (CLI)

To test your trained model on new images or videos, use the following command (adjust train7 to match your run’s folder name):

yolo task=detect mode=predict model=..\ultralytics\runs\detect\train7\weights\best.pt source=path/to/your/input

Using a Python Script

Alternatively, you can test your model using Python. Create a file (e.g., test_model.py) with the following script:

from ultralytics import YOLO

# Load the trained weights (adjust train7 to your run's folder name)
model = YOLO(r"..\ultralytics\runs\detect\train7\weights\best.pt")

# Predict on an image or video
results = model.predict(source="path/to/your/input.jpg", save=True)

# Display results (optional); predict returns a list of Results objects
for result in results:
    result.show()

Run the script, and your model will process the input image or video.
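Beyond the images saved to disk, each Results object exposes the raw detections. For example, appending this to test_model.py prints every predicted box:

# Each Results object carries a Boxes collection with class, confidence, and coordinates
for result in results:
    for box in result.boxes:
        cls_id = int(box.cls)  # predicted class index
        print(model.names[cls_id], float(box.conf), box.xyxy.tolist())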


Step 9: Locate the Predicted Output Files

Once testing is complete, the output predictions will be stored in the following directory:

...\ultralytics\runs\detect\predict

Check this folder to view the predicted results. Note that each new prediction run saves to its own incrementing folder (predict, predict2, and so on).


Conclusion

This step-by-step guide helps you set up YOLO11 for object detection, covering training, testing, and locating outputs. Whether you’re a beginner or an experienced developer, these steps ensure a smooth workflow.

Feel free to explore further by tweaking parameters, adding custom datasets, or integrating YOLO into your applications. If you have questions or insights, share them in the comments below!
