Overview
This guide explains how to install, build, configure, and use Plugable Chat, the open source desktop application designed to run locally on systems connected to the Plugable TBT5-AI GPU enclosure.
Complete the hardware GPU installation procedure before proceeding with this software guide.
Plugable Chat is currently distributed as source code via GitHub and must be built locally. There are no pre-built signed installer packages at this time.
If you have questions at any point, contact Plugable support → support@plugable.com
Suggested Materials
- Host computer with Thunderbolt 5
- Plugable TBT5-AI enclosure with GPU installed
- Reliable internet connection
- Administrator privileges on your system
- At least 20GB free disk space for development tools and dependencies
Before You Begin
- Confirm the GPU is fully installed in the TBT5-AI enclosure.
- Connect the TBT5-AI to your host computer using a certified Thunderbolt 5 cable.
- Connect the enclosure to power.
- Power on the enclosure.
- Boot your computer.
Part 1 – Install GPU Drivers
Plugable Chat requires a properly installed GPU driver.
Windows (NVIDIA Example)
- Visit nvidia.com and download the latest driver for your GPU in the TBT5-AI.
- Run the installer.
- Choose Clean Installation.
- Restart your computer.
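After the reboot, you can sanity-check the driver from a terminal. The snippet below is an illustrative POSIX-shell check (e.g. from Git Bash); on plain cmd you can simply run nvidia-smi directly. It assumes an NVIDIA card, since nvidia-smi ships with the NVIDIA driver.

```shell
# Illustrative driver check; assumes an NVIDIA GPU (nvidia-smi installs with the driver)
if command -v nvidia-smi >/dev/null 2>&1; then
  driver_status="installed"
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
else
  driver_status="missing"
  echo "nvidia-smi not found; the driver may not be installed correctly"
fi
```

If the GPU name and driver version print, the driver is in place and you can continue to Part 2.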
Part 2 – Download Plugable Chat Source Code
Plugable Chat repository:
https://github.com/PlugableTechnologies/plugable-chat
Step 1 – Install Git (if needed)
Windows:
winget install Git.Git
Step 2 – Clone the Repository
git clone https://github.com/PlugableTechnologies/plugable-chat.git
cd plugable-chat
Part 3 – Run the Platform Bootstrap Script
Plugable Chat provides platform bootstrap scripts that install all required dependencies automatically.
There is no separate installer path. The bootstrap script is the supported installation method.
Windows Installation
From inside the repository folder:
.\requirements.bat
This script will:
- Validate Windows version
- Check disk space
- Verify network connectivity
- Install Visual Studio Build Tools with C++ workload
- Install Node.js
- Install Rust
- Install Git
- Install Protocol Buffers
- Initialize Rust toolchain
- Run npm install
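The tool checks above can be sketched as follows. This is an illustrative POSIX-shell version for clarity; the actual requirements.bat is a Windows batch script, and the tool list here is taken from the steps listed above.

```shell
# Sketch of the bootstrap's prerequisite checks (illustrative; the real script is a batch file)
missing=0
for tool in git node cargo protoc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
    missing=$((missing + 1))
  fi
done
echo "tools still missing: $missing"
```

When every tool reports "found", the environment is ready for the build steps below.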
The script may need to be run more than once: newly installed tools (Git, Node.js, Rust) only appear on PATH in a freshly opened command prompt, so open a new prompt and re-run .\requirements.bat until all components report as installed.
Diagnostic mode (no installation):
.\requirements.bat --check
Once complete, start the app:
npx tauri dev
Part 4 – First Launch
When running:
npx tauri dev
The following occurs:
- React frontend builds
- Rust backend compiles
- Tauri desktop application launches
- Local database initializes
- Model configuration loads
On first launch, you may be prompted to:
- Select a model
- Configure local model endpoints
- Adjust settings
Part 5 – Using Plugable Chat
Starting a Chat Session
- Select your model from the model menu.
- Click New Chat.
- Enter your prompt.
- Press Enter or click Send.
Streaming responses will appear token-by-token.
Model Support
Plugable Chat supports multiple model families, including:
- OpenAI-compatible endpoints
- Gemma
- Granite
- Phi
- Other transformer-based models
Model-specific tool-calling formats are handled automatically by the backend.
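For OpenAI-compatible endpoints, the wire format is the standard chat-completions JSON. The sketch below shows the shape of such a request; the URL, port, and model name are placeholders for illustration, not values shipped with Plugable Chat.

```shell
# Hypothetical request to a local OpenAI-compatible server; the endpoint URL
# and model name are placeholders, not Plugable Chat defaults.
ENDPOINT="http://localhost:8000/v1/chat/completions"
BODY='{"model":"local-model","messages":[{"role":"user","content":"Hello"}],"stream":true}'
curl -s -N "$ENDPOINT" -H "Content-Type: application/json" -d "$BODY" \
  || echo "no server reachable at $ENDPOINT"
```

With "stream": true, such servers return the response incrementally, which is what produces the token-by-token display in the chat window.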
Built-In Capabilities
Plugable Chat includes:
- Streaming response engine
- Local vector database (LanceDB)
- Python sandbox execution
- Tool calling loop
- MCP server integration
- Local-first storage of chat history
Troubleshooting
GPU Not Detected
- Confirm Thunderbolt connection.
- Confirm GPU drivers are installed.
- Check Device Manager or run nvidia-smi.
- Power cycle enclosure and system.
Build Fails on Windows
Common causes:
- C++ workload not installed in Visual Studio Build Tools.
- winget not installed.
- UAC prompt waiting behind other windows.
Run diagnostic mode:
.\requirements.bat --check
Rust Compilation Errors
Ensure the Rust toolchain is installed and up to date:
rustup update
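If rustup update alone doesn't resolve the errors, confirm that a toolchain is actually active; rustup show active-toolchain is a standard rustup subcommand.

```shell
# Check whether a Rust toolchain is active (standard rustup subcommand)
if command -v rustup >/dev/null 2>&1; then
  toolchain=$(rustup show active-toolchain 2>/dev/null || echo "none set")
else
  toolchain="rustup not on PATH (open a new terminal or re-run the bootstrap)"
fi
echo "active toolchain: $toolchain"
```

If no toolchain is set, re-running the bootstrap script from a fresh prompt should initialize it.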
Node Errors
Delete node_modules and reinstall.
macOS/Linux:
rm -rf node_modules
npm install
Windows:
rmdir /s /q node_modules
npm install
Frequently Asked Questions (FAQ)
1. Is there a pre-built installer?
No. Plugable Chat must currently be built from source.
2. Do I need programming experience?
No coding is required for normal use. The bootstrap scripts automate setup.
3. Can I run this without a GPU?
Yes, but performance will be significantly reduced.
4. What GPU memory is recommended?
- 12GB VRAM for smaller models (7B class)
- 16–24GB for mid-sized models
- 24GB+ for larger models
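These figures follow from simple arithmetic: weight memory is roughly parameter count × bytes per weight, plus overhead for the KV cache and activations. A sketch of the estimate (the quantization levels shown are illustrative, not Plugable Chat settings):

```shell
# Back-of-envelope VRAM estimate: parameters (billions) x bytes per weight.
# Real usage is higher once the KV cache and activations are included.
params_b=7
fp16_gb=$(awk -v p="$params_b" 'BEGIN { printf "%.1f", p * 2.0 }')   # 16-bit weights
q4_gb=$(awk -v p="$params_b" 'BEGIN { printf "%.1f", p * 0.5 }')     # 4-bit weights
echo "7B model weights: ~${fp16_gb} GB at fp16, ~${q4_gb} GB at 4-bit"
```

This is why a 7B-class model fits comfortably in 12GB of VRAM when quantized, while larger models need the 24GB+ tier.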
5. Does this require internet access?
Only for initial dependency installation and model downloads. The app runs locally.
6. Where is chat history stored?
Locally on disk using LanceDB within the project directory.
7. Is the application secure?
All processing occurs locally. The Python sandbox restricts execution to a curated allowlist.
8. Can I deploy this in production?
Yes, but production builds are not yet code-signed. Enterprise deployments may require internal signing.
9. How do I completely uninstall?
Delete the repository folder and uninstall:
- Node.js
- Rust
- Visual Studio Build Tools (Windows)
10. Who do I contact for help?
Plugable Support → support@plugable.com
Include:
- Operating system
- GPU model
- Error messages
- Output logs
Conclusion
Once the hardware GPU installation is complete and Plugable Chat has been built and launched, your TBT5-AI enclosure becomes a high-performance local AI workstation capable of running advanced models, executing tools, and managing chat workflows entirely on your own system.
For assistance at any stage, contact support@plugable.com.