
Medicinal Plants Detection Using Machine Learning ☘️

Machine Learning / Deep Learning: Project 1

This project detects medicinal plants from images using classical machine learning techniques. The pipeline involves Segmentation, Grayscale Conversion, and feature extraction methods such as GLCM, Gabor filters, and LBP. Multiple models are trained and evaluated to identify the most suitable one for prediction.

Project Overview 📄

The primary objective is to develop a reliable model capable of accurately identifying medicinal plants from images. The project relies on hand-crafted feature extraction rather than neural networks, demonstrating that classical machine learning models can still achieve strong performance on this task.


Data Collection 📦

The dataset comprises images of 30 different plant species, each labeled with both common and botanical names. The images vary in size and appearance, providing a diverse dataset for robust classification.


Preprocessing 🛠️

1. Segmentation ✂️

Images are segmented using the HSV (Hue, Saturation, Value) color space to isolate the relevant plant regions, removing unnecessary background.

(Figure: HSV segmentation result)
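
A minimal sketch of the HSV segmentation step with OpenCV; the green hue bounds below are placeholder values that would need tuning for the actual dataset:

```python
import cv2
import numpy as np

def segment_plant(image_bgr):
    """Isolate plant regions by thresholding green hues in HSV space."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Assumed bounds for "plant green"; tune per dataset
    lower = np.array([25, 40, 40])
    upper = np.array([95, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    # Keep only the masked pixels; the background goes to black
    return cv2.bitwise_and(image_bgr, image_bgr, mask=mask)

segmented = segment_plant(cv2.imread("leaf.jpg"))  # hypothetical filename
```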

2. Grayscale Conversion

The segmented images are then converted to grayscale to simplify further processing while preserving essential details.

(Figure: grayscale conversion result)
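
The conversion itself is a one-liner with OpenCV (a sketch continuing from the segmentation output above):

```python
import cv2

# Collapse the three BGR channels into a single intensity channel
gray = cv2.cvtColor(segmented, cv2.COLOR_BGR2GRAY)
```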

3. Sobel Filter 📐

A Sobel Filter is applied to highlight the edges in the images, making them more suitable for feature extraction.

(Figure: Sobel edge detection result)
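
A minimal Sobel sketch with OpenCV, operating on the grayscale image from the previous step:

```python
import cv2

# Horizontal and vertical gradients; CV_64F keeps negative responses
sobel_x = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
# Combine into an edge-magnitude image and rescale to 8-bit
edges = cv2.convertScaleAbs(cv2.magnitude(sobel_x, sobel_y))
```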

Feature Extraction 🔎

Features are extracted from both segmented and grayscale images using the following techniques:

1. Local Binary Patterns (LBP) 🔢

LBP encodes local texture by comparing each pixel's neighbors to the center pixel, producing a binary code that is converted to a decimal value. The histogram of these values over the image serves as the texture descriptor.
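
A minimal sketch with scikit-image's uniform LBP; the neighbor count and radius are assumed parameters, not the project's recorded settings:

```python
import numpy as np
from skimage.feature import local_binary_pattern

P, R = 8, 1  # 8 neighbors at radius 1 -- assumed settings
lbp = local_binary_pattern(gray, P, R, method="uniform")
# The "uniform" method yields P + 2 distinct codes; their histogram is the descriptor
lbp_hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
```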

2. Gray-Level Co-occurrence Matrix (GLCM) 📈

GLCM analyzes pixel pairs within a specific spatial relationship, creating a matrix that reflects their frequency. This matrix is used to derive texture features such as contrast, correlation, energy, and homogeneity.
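
A minimal sketch with scikit-image; the pixel-pair distance and orientations are assumed settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Co-occurrence of gray levels at distance 1 over four orientations -- assumed settings
glcm = graycomatrix(gray, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)
glcm_features = [graycoprops(glcm, prop).mean()
                 for prop in ("contrast", "correlation", "energy", "homogeneity")]
```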

3. Gabor Filters 👋

Gabor filters process the image by convolving it with a sinusoidal wave modulated by a Gaussian envelope. These filters are sensitive to specific image features, such as edges and textures, at various scales and orientations.
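
A minimal sketch using OpenCV's Gabor kernels; the kernel size, wavelength, and four orientations are assumptions:

```python
import cv2
import numpy as np

gabor_features = []
for theta in np.arange(0, np.pi, np.pi / 4):  # four orientations -- assumed
    # Arguments: (ksize, sigma, theta, lambda, gamma, psi) -- assumed values
    kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0)
    response = cv2.filter2D(gray, cv2.CV_64F, kernel)
    # Summarize each filter response by its mean and variance
    gabor_features += [response.mean(), response.var()]
```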

4. Color Moments 🌈

Color moments, including mean, standard deviation, and skewness, capture the color distribution in each channel (e.g., RGB). These moments are highly effective for image classification tasks.
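
A minimal sketch computing the three moments per channel of the segmented color image; SciPy supplies the skewness:

```python
import cv2
import numpy as np
from scipy.stats import skew

color_moments = []
for channel in cv2.split(segmented):  # one set of moments per B, G, R channel
    pixels = channel.astype(np.float64).ravel()
    color_moments += [pixels.mean(), pixels.std(), skew(pixels)]
```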

Note: A total of 62 features are extracted from each image and stored in a FeatureExtracted.csv file. If you require this file, please contact me using the information provided at the end.


Feature Reduction ✂️

To improve model performance and reduce dimensionality, the dataset is split into training and test sets, and the following techniques are applied (fitted on the training set only, then reused on the test set to avoid data leakage):

1. Principal Component Analysis (PCA) 📊

PCA identifies the principal components that capture the most variance in the data, reducing the dimensionality while retaining essential information.

2. StandardScaler ⚖️

The StandardScaler normalizes the data by transforming each feature to have a mean of 0 and a standard deviation of 1, ensuring consistency during model training.
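
A minimal sketch of the split plus both steps with scikit-learn. Note that scaling is fitted before PCA, matching the order used in the Prediction section below; the CSV layout, test fraction, and 95% variance threshold are assumptions:

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

df = pd.read_csv("FeatureExtracted.csv")        # assumed layout: 62 feature columns + "label"
X = df.drop(columns=["label"]).values
y = df["label"].values
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

scaler = StandardScaler().fit(X_train)          # fit on training data only
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

pca = PCA(n_components=0.95).fit(X_train_s)     # keep 95% of the variance -- assumed
X_train_p, X_test_p = pca.transform(X_train_s), pca.transform(X_test_s)
```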


Model Training 💻

The extracted features are used to train multiple models, with a focus on identifying the best-performing one.

Model Comparison 📊

(Figure: accuracy comparison across the trained models)

Support Vector Machine (SVM) 🚀

SVM achieved the highest accuracy of 99%, outperforming all other models. Its ability to find the optimal hyperplane that maximizes class separation makes it particularly effective for this classification task.
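
A minimal training sketch with scikit-learn's SVC, continuing from the reduced features above; the RBF kernel and hyperparameters are assumptions, not the project's recorded settings:

```python
import joblib
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

svm = SVC(kernel="rbf", C=10, gamma="scale")    # assumed hyperparameters
svm.fit(X_train_p, y_train)
print("Accuracy:", accuracy_score(y_test, svm.predict(X_test_p)))

# Persist the fitted models for the prediction pipeline below
joblib.dump(scaler, "scaler.joblib")
joblib.dump(pca, "pca.joblib")
joblib.dump(svm, "svm.joblib")
```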


Prediction 🔮

Real-time data can be used for prediction by leveraging the pre-trained models: the saved StandardScaler for normalization, PCA for dimensionality reduction, and SVM for classification. The process involves:

  1. Data Preprocessing: Incoming data is normalized using the saved StandardScaler to ensure consistency.
  2. Dimensionality Reduction: The data is transformed using the saved PCA model, retaining the most significant features.
  3. Prediction: The transformed data is fed into the saved SVM model to generate real-time predictions.

This approach ensures efficient and consistent predictions on new, unseen data by reusing the exact transformations and decision boundaries learned during training.
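
A minimal sketch of that three-step pipeline, assuming the models were persisted with joblib under the filenames used above:

```python
import joblib

scaler = joblib.load("scaler.joblib")
pca = joblib.load("pca.joblib")
svm = joblib.load("svm.joblib")

def predict(feature_vector):
    """feature_vector: the 62 features extracted from a new image."""
    x = scaler.transform([feature_vector])  # 1. normalize
    x = pca.transform(x)                    # 2. reduce dimensionality
    return svm.predict(x)[0]                # 3. classify
```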


Contact 📫

If you have any questions, need additional data, or have suggestions or feedback, feel free to contact me:


Thank You for Checking Out This Project! 😄