@janhq

Jan

An open source alternative to OpenAI that runs on your own computer or server

Popular repositories

  1. jan (Public)

    Jan is an open source alternative to ChatGPT that runs 100% offline on your computer, with support for multiple inference engines (llama.cpp, TensorRT-LLM). A short usage sketch of its OpenAI-compatible local API appears after this list.

    TypeScript · 22.1k stars · 1.3k forks

  2. cortex.cpp (Public)

    Run and customize local LLMs.

    C++ · 1.9k stars · 105 forks

  3. cortex.tensorrt-llm (Public)

    Forked from NVIDIA/TensorRT-LLM

    Cortex.Tensorrt-LLM is a C++ inference library that can be loaded by any server at runtime. It submodules NVIDIA’s TensorRT-LLM for GPU-accelerated inference on NVIDIA GPUs.

    C++ · 36 stars · 2 forks

  4. model-converter (Public archive)

    Python · 19 stars · 5 forks

  5. docs (Public)

    Jan.ai Website & Documentation

    MDX · 19 stars · 8 forks

  6. cortex.llamacpp (Public)

    cortex.llamacpp is a high-efficiency C++ inference engine for edge computing. It is a dynamic library that can be loaded by any server at runtime.

    C++ · 16 stars · 3 forks
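Because Jan is pitched as an open source, locally running alternative to OpenAI, a minimal sketch of calling such a local, OpenAI-compatible endpoint from TypeScript is shown below. The port (1337), endpoint path, and model id are assumptions for illustration and are not confirmed by this page.

    // Minimal sketch: query a locally running, OpenAI-compatible server.
    // Assumptions: Jan's local API server is enabled on port 1337 and a model
    // with the (hypothetical) id "llama3-8b-instruct" has been downloaded.
    async function chat(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:1337/v1/chat/completions", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          model: "llama3-8b-instruct", // hypothetical model id
          messages: [{ role: "user", content: prompt }],
        }),
      });
      if (!res.ok) throw new Error(`Request failed: ${res.status}`);
      const data = await res.json();
      return data.choices[0].message.content;
    }

    chat("Hello from a local client").then(console.log).catch(console.error);

This should run on any recent Node.js (18+) or Deno runtime; no API key header is included in the sketch, so add one if your local server is configured to require it.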

Repositories

Showing 10 of 34 repositories
  • cortex.cpp (Public)
    C++ · 1,918 stars · Apache-2.0 · 105 forks · 115 issues (3 need help) · 11 PRs · Updated Sep 20, 2024
  • cortex.llamacpp (Public)
    C++ · 16 stars · AGPL-3.0 · 3 forks · 2 issues · 3 PRs · Updated Sep 20, 2024
  • jan (Public)
    TypeScript · 22,098 stars · AGPL-3.0 · 1,270 forks · 150 issues (1 needs help) · 5 PRs · Updated Sep 19, 2024
  • cortex.so (Public)
    TypeScript · 4 stars · 1 fork · 5 issues · 4 PRs · Updated Sep 19, 2024
  • docs (Public)
  • cortex.tensorrt-llm (Public, forked from NVIDIA/TensorRT-LLM)
    C++ · 36 stars · Apache-2.0 · 922 forks · 7 issues · 3 PRs · Updated Sep 4, 2024
  • winget-pkgs (Public)
    0 stars · 0 forks · 0 issues · 0 PRs · Updated Aug 23, 2024
  • homebrew-tap (Public)
    Ruby · 0 stars · 0 forks · 0 issues · 0 PRs · Updated Aug 22, 2024
  • ppa (Public)
    0 stars · 0 forks · 0 issues · 0 PRs · Updated Aug 21, 2024
  • tokenizer.cpp (Public)
    C++ · 2 stars · AGPL-3.0 · 0 forks · 2 issues · 0 PRs · Updated Aug 13, 2024