Fikra Model Series

Highly optimized ternary-weight models designed for consumer hardware. State-of-the-art reasoning with 1.58-bit quantization.
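
The "1.58-bit" figure comes from ternary weights: each weight takes one of three values {-1, 0, +1}, carrying log2(3) ≈ 1.58 bits of information. The page does not describe Fikra's exact quantizer, so the following is only a minimal sketch of one common ternary scheme (absmean scaling, as used in the BitNet b1.58 literature); the function name ternary_quantize and the choice of a per-tensor scale are illustrative assumptions.

import numpy as np

def ternary_quantize(w, eps=1e-5):
    # Per-tensor scale: mean absolute value of the weights (absmean scaling).
    scale = np.abs(w).mean() + eps
    # Scale, round, and clip into the ternary set {-1, 0, +1}.
    q = np.clip(np.round(w / scale), -1.0, 1.0).astype(np.int8)
    return q, scale

# Each ternary weight encodes log2(3) ≈ 1.58 bits.
w = np.random.randn(4, 4).astype(np.float32)
q, scale = ternary_quantize(w)
w_approx = q.astype(np.float32) * scale  # dequantized approximation used at inference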

Model           Status                   Parameters  Quantization     VRAM/RAM  Context  Availability
Fikra-1B-Nano   v0.2, Release Candidate  1.2B        w1.58 (Ternary)  ~2.1 GB   4k       HuggingFace
Fikra-3B-Edge   In training              3.0B        w1.58            ~4.5 GB   8k       Coming Q2

Updated: Feb 2026 · SHA256 verified

Installation

The Lacesse Python SDK handles model downloading, caching, and inference automatically.

$ pip install fikra
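
This page does not document the SDK's Python API, so the snippet below is only a hedged sketch of what first use might look like. The module name fikra matches the pip package, but load_model() and generate() are assumed names, not documented calls.

# Hypothetical usage sketch; load_model() and generate() are assumptions.
import fikra

# First use is assumed to download and cache the model weights automatically.
model = fikra.load_model("Fikra-1B-Nano")

# Inference runs locally on the device; nothing is sent off-device.
print(model.generate("Summarize ternary quantization in one sentence."))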

Hardware Requirements

  • Apple Silicon: M1, M2, or M3 (native Metal support)
  • Linux/x86: AVX2-compatible CPU (Intel Core i5 8th gen or newer)
  • Raspberry Pi: Pi 5 (8 GB) recommended
