
πŸ•ŠοΈ a mesmerizing engine for handling realtime streaming process in a fully async and concurrent manner with Redis, RMQ and Kafka over WebSocket


wildonion/hoopoe


ᝰ.ᐟ What am I?

I'm hoopoe, a realtime social event platform that lets your hoop get heard!

Execution flow & system design?

Tip

any notification coming from different components or other service actor workers must be handled according to the following steps:

  • step0) a register notif api can be written to register either a producer or a consumer in the background.

  • step1) producer service actor sends NotifData to exchange.

  • step2) consumer service actor receives NotifData from its queue bounded to the exchange.

  • step3) instance of NotifData is cached on redis and stored in db.

  • step4) the client invokes the /notif/get/owner/ api to fetch its notifications during app execution, either in a short-polling manner or through ws streaming.
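the short-polling side of step4 can be sketched with curl; only the /notif/get/owner/ route comes from this readme, while the host, the bearer token, and the query parameter name are placeholders:

```shell
# hypothetical short-polling request against the notif API;
# the host, <jwt> token, and ?owner= query param are assumptions
curl -s -H "Authorization: Bearer <jwt>" \
  "https://api.hoopoe.app/notif/get/owner/?owner=1"
```

for realtime delivery the client would instead open a ws connection to the streaming route listed below and receive NotifData frames as they arrive.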

   ------------------ server1/node1 actor -----------------                                         ___________
  |                                                        |                   ____________________|           |
  |   ___________                            ___________   |                  |                    |           |
  |  |           |                          |           |  |    OVER WS STREAM/SHORT POLLING       |           |
  |  |           |                          |           |  |                  |                    |   CLIENT  |
  |  |  Actor1   |-----message handlers-----|  Actor2   |  |------------------------- HTTP --------|           |
  |  |  tokio    |     |___ jobq ___|       |   tokio   |  |             |                         |           |
  |  | threadpool|--rmq prodcons channels---| threadpool|  |             |                          -----------
  |  |           |                          |           |  |             |
  |   -----------                            -----------   |             |
   --------------------------------------------------------              |
  |  |                                                                   |
  |  |    synchronisation with rmq notif broker                          |
  |  |_____ actor prodcons, data format: NotifData                       |
  |                              |    ----------------- server2/node2 actor ------------------
  |                              |   |                                                        |
  |                              |   |                                                        |
  |                              |   |   ___________                            ___________   |
  |                              |   |  |           |                          |           |  |
  |                              |___|  |           |                          |           |  |
  |        _________                 |  |  Actor1   |-----message handlers-----|  Actor2   |  |
  |       |         |                |  |  tokio    |     |___ jobq ___|       |  tokio    |  |
   -------|   PG    |                |  | threadpool|--rmq prodcons channels---| threadpool|  |
          |  redis  |____mutators____|  |           |                          |           |  |
          |         |____readers ____|   -----------                            -----------   |
          |         |                |                                                        |
           ---------                  --------------------------------------------------------

routes and apis

πŸ₯› HOOPOE WEBSOCKET STREAMING HTTP ROUTE   ==> https://event.api.hoopoe.app/stream
πŸ₯› HOOPOE WEBSOCKET STREAMING WS ROUTE     ==> wss://event.api.hoopoe.app/stream
πŸ›€οΈ HOOPOE HTTP APIs                        ==> https://api.hoopoe.app/
πŸ›€οΈ HOOPOE gRPC APIs                        ==> grpcs://grpc.api.hoopoe.app/
πŸ›’οΈ HOOPOE ADMINER                          ==> https://adminer.hoopoe.app
πŸ‘¨πŸ»β€πŸ’» HOOPOE DBEAVER                          ==> https://dbeaver.hoopoe.app
β›΅ HOOPOE PORTAINER                        ==> https://portainer.hoopoe.app
πŸ“Š HOOPOE GRAFANA                          ==> https://grafana.hoopoe.app
πŸš₯ HOOPOE RMQ                              ==> https://rmq.hoopoe.app
πŸ—žοΈ HOOPOE LOGS                             ==> https://api.hoopoe.app/logs
πŸ—‚οΈ HOOPOE ASSETS FOLDER                    ==> https://api.hoopoe.app/assets
πŸ€ SWAGGER UI                              ==> https://api.hoopoe.app/swagger
πŸ€ RAPIDOC UI                              ==> https://api.hoopoe.app/rapidoc
πŸ€ SCALAR UI                               ==> https://api.hoopoe.app/scalar
πŸ€ REDOC UI                                ==> https://api.hoopoe.app/redoc

πŸ—ƒοΈ wikis, docs, erds, schemas and collections

Rust Ownership and Borrowing Rules

ERD Schema

HTTP Postman Collection

How 2 setup, develop, and deploy?

Note

if you want to deploy as a publisher or producer service, get your hands dirty by developing the apis/http and actors/producers folders; as a subscriber or consumer, develop the apis/http and actors/consumers folders.

Dev env

# -----------------------
# ---- read/write access
sudo chmod -R 777 . 

step0) install necessary packages on Linux:

cd scripts && ./setup.sh

step1) create database

make sure you've created the database using:

sqlx database create
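sqlx-cli reads the connection string from the DATABASE_URL env var (or the --database-url flag); a sketch reusing the same credentials that appear in the entity-generation command further down:

```shell
# tell sqlx-cli which database to create;
# credentials match the entity-generation example later in this readme
export DATABASE_URL="postgres://postgres:geDteDd0Ltg2135FJYQ6rjNYHYkGQa70@localhost/hoopoe"
sqlx database create
```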

step2) create the migration folder (skip this step if you already have it!)

make sure you uncomment the runtime setup inside its Cargo.toml file.

sea-orm-cli migrate init -d migration
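the generated migration/Cargo.toml ships with the async runtime and database driver features commented out; a sketch of what the uncommented section might look like (the version number is an assumption, match it to your workspace):

```toml
# migration/Cargo.toml -- enable one async-runtime and one database-driver feature
[dependencies.sea-orm-migration]
version = "1.0"              # assumption: match the sea-orm version you use
features = [
  "runtime-tokio-rustls",    # async runtime
  "sqlx-postgres",           # database driver
]
```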

step3) create migration files

make sure you've installed sea-orm-cli, then create a migration file per table operation; each file contains a Migration structure with up and down methods extended by the MigrationTrait interface. take note that in production you must create a separate migration per db operation.

sea-orm-cli migrate generate "sql_process_name_like_table_name_or_add_column"

step4) apply pending migrations (fresh, refresh, reset, up or down)

once you're done adding changes to your migration files, just run the following to apply them to the db.

# rollback all applied migrations, then reapply all migrations
sea-orm-cli migrate refresh # or up
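the other subcommands mentioned above follow the same shape; these are all standard sea-orm-cli migrate subcommands:

```shell
sea-orm-cli migrate status    # check which migrations have been applied
sea-orm-cli migrate up        # apply all pending migrations
sea-orm-cli migrate down      # rollback the last applied migration
sea-orm-cli migrate fresh     # drop all tables, then reapply all migrations
sea-orm-cli migrate reset     # rollback all applied migrations
sea-orm-cli migrate refresh   # rollback all, then reapply all
```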

step5) generate entity files for ORM operations

generate Rust structures in the entity/src folder from the migrations applied to the hoopoe database; after that you can proceed with editing each entity, e.g. adding hooks for different actions on an active model.

# generate entities for the database and all tables
sea-orm-cli generate entity -u postgres://postgres:geDteDd0Ltg2135FJYQ6rjNYHYkGQa70@localhost/hoopoe -o src/entities --with-serde both --serde-skip-deserializing-primary-key
# generate entity for a specific table only, eg: generating entity for the hoops table
sea-orm-cli generate entity -t hoops -o src/entities --with-serde both --serde-skip-deserializing-primary-key
# don't skip deserializing primary key
sea-orm-cli generate entity -u postgres://postgres:geDteDd0Ltg2135FJYQ6rjNYHYkGQa70@localhost/hoopoe -o src/entities --with-serde both

step6) run server

when you run the server with the --fresh flag it'll fresh all migrations at startup (drop all tables from the database, then reapply all migrations); otherwise it'll only apply pending migrations (calling the up method of each migration file).

# -------------------------------
# ------ hoopoe server --------
# -------------------------------
# launch as http with freshing db
cargo run --bin hoopoe -- --server http --fresh # default is http and fresh migrations
# or see help
cargo run --bin hoopoe -- --help

πŸš€ Prod env (the CI/CD approach):

Important

make sure you've opened all necessary domains inside your DNS panel (one per nginx config file) and changed hoopoe.app to your own domain name everywhere, mostly in the nginx config files and the APP_NAME in consts.rs.

this approach can be used if you need a fully automatic deployment process. it uses github actions to build and publish all images to a self-hosted docker registry on a custom VPS, so update the github ci/cd workflow files inside the .github/workflows folder to match your VPS info; eventually, on every push, the ci/cd process will build and push the docker images to the self-hosted registry automatically. instead of a custom registry you can use either Docker Hub or github packages as well!

it's notable that you should renew the nginx service every time you add a new domain or subdomain: the ./renew.sh script will create ssl certificates with certbot for your new domain and add them inside the infra/docker/nginx folder so the nginx docker can copy them into its own container. for every new domain there must be its ssl certs and an nginx config file inside that folder, so make sure you've set up all your domains before pushing to the repo. continue reading...
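a minimal sketch of what one of those per-domain nginx config files might look like; the upstream service name, port, and cert file names are assumptions, only the infra/docker/nginx folder layout comes from this readme:

```nginx
# infra/docker/nginx/newsub.hoopoe.app.conf -- hypothetical subdomain config
server {
    listen 443 ssl;
    server_name newsub.hoopoe.app;

    # certs produced by ./renew.sh (certbot) for this domain; names are placeholders
    ssl_certificate     /etc/nginx/cert-newsub.hoopoe.app.pem;
    ssl_certificate_key /etc/nginx/key-newsub.hoopoe.app.pem;

    location / {
        proxy_pass http://hoopoe-http:8080;  # service name and port are placeholders
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```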

you can build and up all images on your machine with sudo docker compose up -d --build command.

Important

if you want to run the server over HTTP3 just make sure that ssl certificate keys are setup properly inside the infra/http3/certs folder, namely hoopoecert.pem and hoopoekey.pem.
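for a local or test setup you can generate a self-signed pair with openssl instead of certbot; the subject name is an assumption, while the file names and folder come from the note above:

```shell
# generate a self-signed cert/key pair for local HTTP3 testing
mkdir -p infra/http3/certs
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=localhost" \
  -keyout infra/http3/certs/hoopoekey.pem \
  -out    infra/http3/certs/hoopoecert.pem
```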

🚨 me before you!

Caution

make sure you've done following configurations properly before pushing to your repo:

  • step0) run sudo docker network create hoopoe_is_there1 && sudo docker network create hoopoe_is_there2, then create a database on the VPS with the same name as the one in the .env file.

  • step1) generate new ssl dh params for nginx using openssl dhparam -out infra/docker/nginx/ssl-dhparams.pem 4096 command.

  • step2) set up ssl certs using the renew.sh script and an nginx config file per domain and subdomain, then put them inside the infra/docker/nginx folder. you MUST do this before you push to the repo on github, because the nginx container inside docker-compose.yml needs those files to be there so they can be moved into the container on every push!

  • step3) you would probably want to make the logs dir and the docker.hoopoe.app routes secure and safe; you can achieve this by adding an auth guard on the docker registry subdomain and the logs dir inside their nginx config files, then set up the password for them by running sudo apt-get install -y apache2-utils && sudo htpasswd -c infra/docker/nginx/.htpasswd hoopoe. the current one is hoopoe@1234.

  • step4) run sudo rm .env && sudo mv .env.prod .env then update necessary variables inside .env file.

  • step5) connect your device to github for workflow actions using gh auth login -s workflow; this allows you to push workflow changes to the repo.

  • step6) setup DOCKER_PASSWORD, DOCKER_USERNAME, SERVER_HOST, SERVER_USER and SERVER_PASSWORD secrets on your repository, (DOCKER_PASSWORD and DOCKER_USERNAME are nginx login info to access the docker.hoopoe.app url).
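the same secrets can be set from the terminal with the github cli; the secret names come from the step above, the values shown are placeholders:

```shell
# set the repository secrets used by the ci/cd workflow (values are placeholders)
gh secret set DOCKER_USERNAME --body "hoopoe"
gh secret set DOCKER_PASSWORD --body "<nginx-password>"
gh secret set SERVER_HOST     --body "<vps-ip>"
gh secret set SERVER_USER     --body "root"
gh secret set SERVER_PASSWORD --body "<vps-password>"
```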

  • step7) create a /root/hoopoe folder on your VPS containing only the docker-compose.yml file, and update its path inside the cicd.yml file in the ssh action part where you're changing directory to where the docker compose file is.

  • step8) make sure the docker registry service is up and running on your VPS (run sudo docker run -d -p 5000:5000 --restart always --name registry registry:2) and that you've already set up the docker.hoopoe.app subdomain pointing to http://localhost:5000. then login to your hub with the sudo docker login docker.hoopoe.app command (use the nginx username and password; note that if you haven't set up a username and password there's no need to login, simply run push and pull on the VPS).

  • step9) each internal image name inside your compose file must be prefixed with your docker hub registry endpoint; doing so tells docker to pull images from there, because this subdomain is already pointing to the docker registry hosted on localhost:5000 on the VPS.

the current hub registry is set to docker.hoopoe.app, and the /root/hoopoe folder on the VPS is where the docker-compose.yml file lives.
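step9 above boils down to an image key shaped like this; the service name and tag are assumptions, only the registry endpoint comes from this readme:

```yaml
# docker-compose.yml -- image prefixed with the self-hosted registry endpoint
services:
  hoopoe-http:                                  # service name is a placeholder
    image: docker.hoopoe.app/hoopoe-http:latest # tag is a placeholder
    restart: always
```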

make sure you've logged in with sudo, because cicd.yml builds, pushes and pulls images with sudo docker ... commands; if you are not running docker with sudo, make sure there is no sudo in the cicd.yml file.

Tip

additionally you can push a docker image to your custom docker registry manually:

sudo docker login docker.hoopoe.app # login to the registry
sudo docker tag hoopoe-http docker.hoopoe.app/hoopoe-http # tag the image first
sudo docker push docker.hoopoe.app/hoopoe-http # push to the registry

login to the VPS and put the docker-compose.yml in the app directory (/root/hoopoe) then run:

cd /root/hoopoe/ && sudo docker compose -f "docker-compose.yml" up -d

it'll automatically pull the images from the registry specified in the image key inside the compose file; just make sure the image tag name matches the one inside the docker-compose.yml file.

β˜• What's happening inside the cicd.yml file?

  • step1) it reads the code inside the repository to find the docker-compose.yml file.

  • step2) it tries to login (docker username and password) to your custom docker hub (the registry on your VPS secured with the nginx auth guard).

  • step3) it builds all docker container images inside your docker-compose.yml file.

  • step4) eventually it pushes them to your custom docker hub registry.

  • step5) it does ssh into the VPS, cds to where you've put the docker-compose.yml file, then pulls and ups all pushed docker containers from the hub on the VPS.
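the pipeline described above could correspond to a workflow roughly shaped like this; action versions, job names, and the use of appleboy/ssh-action are assumptions, not a copy of the real cicd.yml:

```yaml
# .github/workflows/cicd.yml -- hypothetical sketch of the described pipeline
name: cicd
on: [push]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4               # read the repo / find the compose file
      - name: login to the self-hosted registry
        run: echo "${{ secrets.DOCKER_PASSWORD }}" | sudo docker login docker.hoopoe.app -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
      - name: build images
        run: sudo docker compose -f docker-compose.yml build
      - name: push images to the registry
        run: sudo docker compose -f docker-compose.yml push
      - name: pull & up on the VPS
        uses: appleboy/ssh-action@v1.0.3        # version is an assumption
        with:
          host: ${{ secrets.SERVER_HOST }}
          username: ${{ secrets.SERVER_USER }}
          password: ${{ secrets.SERVER_PASSWORD }}
          script: |
            cd /root/hoopoe
            sudo docker compose -f docker-compose.yml pull
            sudo docker compose -f docker-compose.yml up -d
```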

last but not least!

use postman or the swagger ui to check the server health, then continue with registering a notif producer and consumer.
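or from the terminal; the health route here is a guess, check the swagger ui for the real path:

```shell
# hypothetical health check -- the exact route may differ, see the swagger ui
curl -s https://api.hoopoe.app/health
```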
