End-to-End-ML-Recipe (employee burnout prediction)

Setup conda env

Set up the conda virtual environment from the environment.yml file in this repo:

conda env create -f environment.yml
conda activate emp_burnout

Install the package

Install the codebase as a package (via setup.py):

pip install .

Usage

  • To run the servers (MLflow tracking server, MinIO, NGINX), bring up docker-compose:

    docker-compose up 
    • Make sure to configure the volumes (check the host paths) in the docker-compose.yml file as needed for your machine.
    • These servers are only needed for experiment tracking. If you don't need MLflow tracking, you can skip this step.
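    • For example, a host-path volume mapping in docker-compose.yml might look like the snippet below (the service name and paths are illustrative placeholders, not necessarily the ones used in this repo):

      services:
        minio:
          volumes:
            - /your/host/path/minio-data:/data   # placeholder: replace with a host path that exists on your machine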
  • For the client REST APIs, start the app with uvicorn:

    uvicorn emp_burnout.app:app
  • Head over to localhost:8000/docs to view the Swagger UI for the exposed REST APIs. A sample request is sketched below.
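  • As a rough sketch only (the endpoint path and payload fields below are placeholders; check the Swagger UI at /docs for the app's actual routes and request schemas), a prediction request could look like:

      curl -X POST http://localhost:8000/predict \
           -H "Content-Type: application/json" \
           -d '{"some_feature": 1.0}'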

  • For the job config files, refer to configs/train.yml and configs/predict.yml.

    • train.yml also contains the training hyperparameters, which can be modified as needed; a sketch of what that section might look like follows.
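    • As an illustration only (the keys below are placeholders; the real hyperparameter names are defined in configs/train.yml), the training config might look like:

      model:
        n_estimators: 100   # placeholder hyperparameter name and value
        max_depth: 6        # placeholder hyperparameter name and value
      data:
        test_size: 0.2      # placeholder train/test split setting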

TODOs:

  • Groundwork
  • Environment setup
  • Ingestion to DB
  • Docker setup
  • MLFlow Server setup
  • Training Job
  • Batch prediction Job
  • Single input prediction Job
  • Cleanup
  • Pydantic config parsing
  • REST APIs
  • Update README