πŸ›Έ Vensa 1.1B (GGUF)


Vensa 1.1B is a lightweight, specialized AI model fine-tuned for Flutter development and Dart programming.

This GGUF version is optimized for high-performance inference on local machines and edge devices using llama.cpp. It balances a small footprint with deep knowledge of mobile app architecture and widget implementation.

πŸ›  Model Details

  • Architecture: Llama-based (1.1B Parameters)
  • Format: GGUF (Optimized for CPU & Mobile)
  • Quantization: Q4_K_M (High efficiency with minimal quality loss)
  • Primary Focus: Dart syntax, Flutter widget trees, state management (Provider, Riverpod, Bloc), and mobile UI/UX patterns

πŸ”’ API Implementation

While the model weights are open-source (GGUF), the Vensa API is currently a private service used for specialized integrations.

If you are interested in collaboration or custom API access, please contact the developer directly.

πŸ’» How to Use (Python / llama-cpp-python)

To run this model locally, first install the dependencies:

pip install llama-cpp-python huggingface-hub

Then run the following Python script:

from llama_cpp import Llama
from huggingface_hub import hf_hub_download

# 1. Download the GGUF model
model_path = hf_hub_download(
    repo_id="prawinn04/vensa-1.1b-gguf",
    filename="vensa-1.1b.gguf"
)

# 2. Load the model
llm = Llama(
    model_path=model_path,
    n_ctx=2048,
    n_threads=4  # Adjust based on your CPU cores
)

# 3. Run a Flutter-related prompt
output = llm(
    "### Instruction:\nExplain how to use a ListView.builder in Flutter.\n\n### Response:",
    max_tokens=512,
    stop=["### Instruction:"]
)

print(output['choices'][0]['text'])
βš–οΈ License
This model is released under the MIT License.