🔥 Star GridMaster on GitHub
Give it a ⭐ on GitHub to support the project and help others discover it!
👋 Welcome to GridMaster¶
Welcome to GridMaster – an advanced Python toolkit I built to automate hyperparameter tuning and model selection across multiple classifiers.
With just a few lines of code, GridMaster helps you:
- ✅ Automatically optimize key classifiers on your dataset
- ✅ Narrow down from broad, industry-recommended parameter grids
- ✅ Fine-tune around the best parameter ranges using smart linear or logarithmic scaling
- ✅ Run multiple models in parallel and automatically select the top performer – no manual, repetitive grid-search loops needed
- ✅ Stay fully compatible with `GridSearchCV` workflows – migrate easily with no extra learning curve, including advanced settings like GPU acceleration
- ✅ Balance system load and performance by default, using half of the available CPU cores for parallel search – adjustable for advanced users (see Advanced Setting – CPU Parallelism (`n_jobs`))
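The half-of-available-cores default can be reproduced with plain scikit-learn. Below is a minimal sketch; the `default_n_jobs` helper is my own illustration, not part of GridMaster's API:

```python
import os

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV


def default_n_jobs() -> int:
    """Use half of the available CPU cores, but always at least one."""
    return max(1, (os.cpu_count() or 1) // 2)


X, y = load_iris(return_X_y=True)
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    n_jobs=default_n_jobs(),  # balance search speed against system load
    cv=5,
)
search.fit(X, y)
print(search.best_params_)
```

Because this is standard `GridSearchCV`, any existing grid-search code (including custom scorers or CV splitters) carries over unchanged.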
⚙️ New in v0.5.x
- Smart `mode` selector: switch between `'fast'` (default) for lightweight search and `'industrial'` for expanded grids and deeper optimization.
- Solver auto-selection for Logistic Regression: automatically picks `'liblinear'` or `'saga'` depending on dataset size and mode, enabling elasticnet where applicable.
- Expanded coarse grids under `'industrial'` mode for XGBoost, LightGBM, CatBoost, and Random Forest.

⚙️ New in v0.3.x
- Smart, Expert, and Custom fine-tuning modes:
  - Smart: automatically selects the top 2 most impactful hyperparameters based on variation analysis.
  - Expert: focuses on commonly sensitive parameters like `learning_rate` and `max_depth`.
  - Custom: user-defined fine grids.
- Parallel CPU support: automatically detects the available CPU cores and assigns half of them for faster parallel search (`n_jobs`), balancing speed and system performance. You can override this manually if needed.
- GPU acceleration: pass GPU-specific flags (like `tree_method='gpu_hist'`) directly to XGBoost, LightGBM, and CatBoost via `custom_estimator_params`.
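The solver auto-selection rule can be sketched in plain scikit-learn. The `pick_solver` helper and its 10,000-row threshold are my assumptions for illustration, not GridMaster's actual internals:

```python
from sklearn.linear_model import LogisticRegression


def pick_solver(n_samples: int, mode: str = "fast") -> dict:
    """Choose Logistic Regression solver settings from dataset size and mode.

    Illustrative rule: 'liblinear' is fast on small data but supports only
    l1/l2 penalties; 'saga' scales to large data and is the only solver
    that supports the elasticnet penalty. The threshold is an assumption.
    """
    if mode == "industrial" or n_samples > 10_000:
        return {"solver": "saga", "penalty": "elasticnet", "l1_ratio": 0.5}
    return {"solver": "liblinear", "penalty": "l2"}


# Small dataset in 'fast' mode -> lightweight 'liblinear' solver
clf = LogisticRegression(max_iter=5000, **pick_solver(n_samples=500))
print(clf.solver)
```

The key constraint the rule encodes is real: in scikit-learn, `penalty='elasticnet'` is only valid with `solver='saga'`.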
📦 Supported Models¶
GridMaster currently supports classification models only:
- ✅ Logistic Regression
- ✅ Random Forest
- ✅ XGBoost
- ✅ LightGBM
- ✅ CatBoost

⚠️ Note: Decision Trees are currently excluded, as they are rarely used in industrial hyperparameter-optimization workflows.
🏗️ GridMaster is built on top of `scikit-learn` and integrates seamlessly with popular libraries like `XGBoost`, `LightGBM`, and `CatBoost`, providing a familiar interface for model tuning and evaluation.
🔍 How It Works¶
1. Coarse Search
   Starts with broad, commonly recommended parameter grids for each classifier (e.g., `C`, `max_depth`, `learning_rate`), providing a robust initial exploration of the search space.
   Mode selection – GridMaster now supports two modes:
   - `'fast'`: focused, minimal grids for quick iteration.
   - `'industrial'`: wide grids with additional hyperparameters, better suited for production-scale optimization.
2. Fine Search
   Automatically refines parameter ranges around the best coarse result:
   - For linear-scale parameters, narrows the range by ±X% (default ±50%).
   - For log-scale parameters (like `C` and `learning_rate`), adjusts intelligently on the log scale, ensuring meaningful search coverage without wasting runs.
3. Multi-Stage Search
   Allows multiple fine-tuning rounds with custom precision, eliminating the need to manually loop grid searches for each model. By default, both Fine Search and Multi-Stage Search refine the top 2 most impactful parameters, ranked by the performance variation observed during the coarse search stage.
4. Multi-Model Comparison
   Trains and tunes all supported models in parallel, automatically identifies the top performer, and outputs detailed metrics and plots for interpretation.
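The coarse-then-fine loop above maps directly onto chained `GridSearchCV` runs. Here is a minimal sketch of the log-scale refinement step, assuming a ±1-decade window around the coarse winner (the window width and variable names are my illustration, not GridMaster's code):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000)

# 1. Coarse search: broad, log-spaced grid for the regularization strength C
coarse_grid = {"C": np.logspace(-3, 3, 7)}
coarse = GridSearchCV(model, coarse_grid, cv=3).fit(X, y)
best_c = coarse.best_params_["C"]

# 2. Fine search: re-grid on the log scale, centered on the coarse winner,
#    spanning one decade on either side (an assumed window width)
fine_grid = {"C": np.logspace(np.log10(best_c) - 1, np.log10(best_c) + 1, 5)}
fine = GridSearchCV(model, fine_grid, cv=3).fit(X, y)

print(coarse.best_params_, fine.best_params_)
```

Refining on the log scale matters here: a ±50% linear window around, say, `C=0.001` would barely move the value, while one decade either way covers the range where log-scale parameters actually change model behavior.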
✨ Why I Built GridMaster¶
I designed GridMaster to free myself (and others) from the repetitive burden of per-model grid search, offering a clean, unified, and automated workflow for classification tasks.
It encapsulates the entire ML pipeline – from preprocessing and feature selection to model training and evaluation – ensuring reproducibility, modularity, and smooth end-to-end optimization.
"Laziness fuels productivity."
🛠️ Get Started¶
- Check the Quickstart Guide
- Run your first multi-model search pipeline
- Visualize and compare model performances
- Dive into Essential Tools and Advanced Utilities
👤 About the Author¶
Hi, I'm Winston Wang – a data scientist passionate about making a meaningful contribution to the world, one data-driven solution at a time.
For feedback or suggestions, feel free to email me at:
📧 mail@winston-wang.com
For more about me, please visit my personal website.