A 3 billion-parameter Llama 3.2 model fine-tuned end-to-end for general engineering and computer-architecture tasks.

Key Specifications:

- Model Size: 3B parameters
- Quantization: 4-bit (INT4)
- Frameworks: PyTorch & Hugging Face Transformers
- Deployment: On-device inference & cloud
- Training Data: Engineering textbooks, lecture notes, Logisim circuit diagrams

Why It Matters

Versatile & Accurate
Strikes the sweet spot between footprint and capability—ideal for prototyping, code reviews, and architecture Q&A.

Domain-Focused
Trained on real-world computer-engineering resources (schematics, datasheets, lab write-ups) to deliver actionable, context-aware responses.

Easy Integration
Optimized for edge devices and microservices alike—drop it into your CI/CD pipeline, mobile tooling, or web demo.

Available for non-commercial use under Meta’s license (see GitHub).

Quick start example
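The sketch below shows one way to load the 4-bit quantized checkpoint and ask it a question with Hugging Face Transformers. It is illustrative, not official: `MODEL_ID` is a placeholder (the real checkpoint path is in the GitHub repository), and the prompt template is an assumption, not necessarily the format the model was fine-tuned on.

```python
# Quick-start sketch. MODEL_ID is a hypothetical placeholder — replace it with
# the actual checkpoint path from the GitHub repository.
MODEL_ID = "path/to/llama-3.2-3b-engineering"  # placeholder, not a real Hub ID

def format_prompt(question: str) -> str:
    """Wrap a user question in a simple instruction template.

    This template is an illustrative assumption; check the repository for the
    prompt format the model was actually trained with.
    """
    return f"### Question:\n{question}\n\n### Answer:\n"

def generate(question: str, max_new_tokens: int = 256) -> str:
    """Load the model in 4-bit and generate an answer.

    Heavy imports are kept inside the function so the prompt helper above can
    be used without torch/transformers/bitsandbytes installed.
    """
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    quant = BitsAndBytesConfig(
        load_in_4bit=True,                       # matches the INT4 spec above
        bnb_4bit_compute_dtype=torch.float16,
    )
    tok = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, quantization_config=quant, device_map="auto"
    )
    inputs = tok(format_prompt(question), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Example usage (requires torch, transformers, and bitsandbytes):
# print(generate("What does a carry-lookahead adder optimize?"))
```

Keeping the generation logic behind a single function makes it easy to drop into a microservice or CLI, in line with the deployment targets listed above.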

Get Started

🔗 GitHub Repository

For full usage details and download instructions, see the model in the GitHub repository linked above.